All Episodes

BI 184 Peter Stratton: Synthesize Neural Principles

Brain Inspired

What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we’ll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test.

BI 183 Dan Goodman: Neural Reckoning

You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.

BI 182: John Krakauer Returns… Again

John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he’s been working on and thinking about lately.

BI 181 Max Bennett: A Brief History of Intelligence

By day, Max Bennett is an entrepreneur: he has cofounded and been CEO of multiple AI and technology companies. In countless hours outside of work, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

BI 179 Laura Gradowski: Include the Fringe with Pluralism

Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism, or scientific pluralism anyway, is roughly the idea that there is no unified account of any scientific field, and that we should tolerate and welcome a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is something of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion: many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our scientific studies and explanations? Laura suggests we should be very, very pluralistic, and to make her case she cites examples from the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream, accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, and so on.

BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions

Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks: how to think about them, why they matter, and how Eric's perspectives have changed throughout his career.

BI 177 Special: Bernstein Workshop Panel

I was recently invited to moderate a panel at the annual Bernstein Conference, held this year in Berlin, Germany. The panel was part of a satellite workshop called "How can machine learning be used to generate insights and theories in neuroscience?" Below are the panelists. I hope you enjoy the discussion!

BI 176 David Poeppel Returns

David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.

BI 175 Kevin Mitchell: Free Agents

Kevin Mitchell is a professor of genetics at Trinity College Dublin. He's been on the podcast before, when we talked a little about his previous book, Innate: How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book, Free Agents: How Evolution Gave Us Free Will.

BI 174 Alicia Juarrero: Context Changes Everything

In this episode, Alicia Juarrero and I discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes a thorough case that constraints deserve far more attention when we try to understand complex systems like brains and minds: how they're organized, how they operate, how they're formed and maintained, and so on.

BI 173 Justin Wood: Origins of Visual Intelligence

Justin Wood runs the Wood Lab at Indiana University, whose tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with real chicks, whereby the chicks are raised from birth in completely controlled visual environments.

BI 172 David Glanzman: Memory All The Way Down

David runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view: that our memories are stored in our synapses, those connections between our neurons. As we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That has been the dominant view in neuroscience for decades, and it's the fundamental principle underlying basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David explains where he and Randy differ in their thinking. This episode starts out fairly technical, as David describes the series of experiments that changed his mind, but after that we broaden our discussion to many of the surrounding issues regarding whether his story about memory is true. We also discuss meta-issues, like how old, discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including the challenges of getting funded for it, and so on.

BI 171 Mike Frank: Early Language and Cognition

My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition Lab at Stanford. Mike's main interests center on how children learn language; in particular, he focuses on early word learning and what it tells us about our other cognitive functions, like concept formation and social cognition. We discuss that, along with his love for developing open datasets that anyone can use; the dance he dances between bottom-up, data-driven approaches in this big-data era, traditional experimental approaches, and top-down, theory-driven approaches; how early language learning in children differs from LLM learning; and Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.

BI 170 Ali Mohebi: Starting a Research Lab

In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.

BI 169 Andrea Martin: Neural Dynamics and Language

My guest today is Andrea Martin, research group leader of the Language and Computation in Neural Systems group at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language, and to this end she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

This is the first in a mini-series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, can momentarily lower our ever-critical scientific mindset, allowing us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?

BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching, tree-like structures sticking out of all your neurons, and she thinks you should love dendrites too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites simply reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many important signal transformations before signals reach the cell body. For example, in 2003 Yiota showed that, because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown that single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks will need to favor as well moving forward.