
Right now, the tech world is caught in an endless loop of throwing massive compute power at Large Language Models, hoping brute force will magically spark Artificial General Intelligence (AGI). But what if the foundational computing architecture is entirely wrong?
In this episode, we sit down with Ian Hamilton, CEO of Synthetic Cognition Labs, who is walking away from standard models to build true AGI.
Ian breaks down complex ideas, detailing why current AI is essentially faking memory and why the path forward lies in hyperdimensional computing. Exploring the friction between biology and technology, we examine how mapping the neural network of a fruit fly provides a better roadmap for continuous learning than a billion-dollar GPU cluster. You’ll learn the critical difference between LLM tokenization and human “analogy-making,” and why breaking the AI scale monopoly might require us to nuke everything we know about computing and start over.
If you are tired of the AI buzzword salad and want to decode the future, this is your blueprint.
Follow Ian: https://www.linkedin.com/in/ianchamilton1/
Check out: https://syntheticcognitionlabs.com/
Watch us on YouTube: https://www.youtube.com/watch?v=Rd0SpOb5gMo
Timestamps:
(00:00) Preview
(00:58) Ian’s introduction
(04:15) Why static LLMs fail at continuous learning
(07:14) The coding loop and AI memory walls
(16:30) Hyperdimensional computing and non-Von Neumann architecture
(17:44) Biological inspiration from fruit fly neural networks
(24:00) Sparse distributed memory and human-like analogy
(39:20) Bridging the hardware gap with LLM emulation
(51:40) The danger of the AI scaling monopoly
Support the pod:
https://3reate.com
https://ko-fi.com/3reate
https://patreon.com/3reate
Listen:
https://podcasts.apple.com/us/podcast/3reate/id1723426314
https://open.spotify.com/show/48Y2M7Ppja43Uq2wlyUtPF
https://youtu.be/2wEMD8EvB9I?si=G3iUBE-z4Mx0Ng-Y