All Episodes

BI 188 Jolande Fooken: Coordinating Action and Perception


Brain Inspired

Jolande Fooken is a postdoctoral researcher interested in how we coordinate our eye and hand movements to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple, yet we do it all the time, for example when making meals for our children day in and day out.

BI 187: COSYNE 2024 Neuro-AI Panel

Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year marked the 20th anniversary of COSYNE, and we were in Lisbon, Portugal.

BI 186 Mazviita Chirimuuta: The Brain Abstracted

Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.

BI 185 Eric Yttri: Orchestrating Behavior

Eric’s lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks or, in this case, freely behave, wandering around an enclosed space.

BI 184 Peter Stratton: Synthesize Neural Principles

What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we’ll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test.

BI 183 Dan Goodman: Neural Reckoning

You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.

BI 182: John Krakauer Returns… Again

John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he’s been working on and thinking about lately.

BI 181 Max Bennett: A Brief History of Intelligence

By day, Max Bennett is an entrepreneur; he has co-founded, and served as CEO of, multiple AI and technology companies. In countless hours outside of that, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

BI 179 Laura Gradowski: Include the Fringe with Pluralism

Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism, or scientific pluralism anyway, is roughly the idea that there is no unified account of any scientific field, and that we should be tolerant of, and even welcome, a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is kind of a buzzword right now in my little neuroscience world, but it’s an old and well-trodden notion; many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case she cites examples in the history of science of theories and theorists that were once considered “fringe” but went on to become mainstream, accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, and so on.

BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions

Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks: how to think about them, why they matter, and how Eric’s perspectives have changed over his career.

BI 177 Special: Bernstein Workshop Panel

I was recently invited to moderate a panel at the annual Bernstein Conference, this year in Berlin, Germany. The panel I moderated was part of a satellite workshop called “How can machine learning be used to generate insights and theories in neuroscience?” Below are the panelists. I hope you enjoy the discussion!

BI 176 David Poeppel Returns

David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.

BI 175 Kevin Mitchell: Free Agents

Kevin Mitchell is professor of genetics at Trinity College Dublin. He’s been on the podcast before, when we talked a little about his previous book, Innate: How the Wiring of Our Brains Shapes Who We Are. He’s back today to discuss his new book, Free Agents: How Evolution Gave Us Free Will.

BI 174 Alicia Juarrero: Context Changes Everything

In this episode, Alicia Juarrero and I discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given much more attention when trying to understand complex systems like brains and minds: how they’re organized, how they operate, how they’re formed and maintained, and so on.

BI 173 Justin Wood: Origins of Visual Intelligence

Justin Wood runs the Wood Lab at Indiana University, and his lab’s tagline is “building newborn minds in virtual worlds.” In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments.

BI 172 David Glanzman: Memory All The Way Down

David runs his lab at UCLA, where he’s also a distinguished professor. David used to believe what is currently the mainstream view: that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they’re just right, and that serves to preserve the memory. That’s been the dominant view in neuroscience for decades, and it’s the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others’ experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thinking. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues, like how old discarded ideas in science often find their way back, what it’s like studying a non-mainstream topic, including the challenges of trying to get funded for it, and so on.

BI 171 Mike Frank: Early Language and Cognition

My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike’s main interests center on how children learn language – in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.
We discuss that, along with his love for developing open data sets that anyone can use; the dance he dances between bottom-up, data-driven approaches in this big-data era, traditional experimental approaches, and top-down, theory-driven approaches; how early language learning in children differs from LLM learning; and Mike’s rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.