All Episodes

BI 132 Ila Fiete: A Grid Scaffold for Memory

Brain Inspired

Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what’s happening in the world and in our thoughts. Thus, the place cells act to “pin” what’s happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a “neurophysicist”, and a review she’s publishing about the many brain areas and cognitive functions now being explained as attractor landscapes within a dynamical systems framework.

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It’s an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building “neuromodulation-aware DNNs”.

BI 130 Eve Marder: Modulation of Networks

Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric ganglion (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve’s work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what it implies about trying to study much larger nervous systems, like our human brains.

BI 129 Patryk Laurent: Learning from the Real World

Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what’s needed to move forward in AI, including which principles of brain processing are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.

BI 128 Hakwan Lau: In Consciousness We Trust

Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating “engram cells” originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanism as our memories (engrams), and that memories may transition to instincts across generations.

BI 126 Randy Gallistel: Where Is the Engram?

Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about research and theoretical work since then that supports his views.

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Doris, Tony, and Blake are the organizers of this year’s NAISys conference (From Neuroscience to Artificially Intelligent Systems) at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the “start”. This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won’t be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.

BI 123 Irina Rish: Continual Learning

Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically-plausible alternatives to back-propagation, using “auxiliary variables” in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on any tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.

BI 122 Kohitij Kar: Visual Intelligence

Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in Jim DiCarlo’s lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity and performing perturbation studies to better understand the networks involved in our visual cognition.

BI 121 Mac Shine: Systems Neurobiology

Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.

BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories

James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories of individual events, full of particular details, and through a complementary process the brain slowly consolidates those memories within our neocortex, via mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss what their theory predicts about how the “correct” consolidation process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.

BI 119 Henry Yin: The Crisis in Neuroscience

Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied… by the experimenter.

BI 118 Johannes Jäger: Beyond Networks

Johannes (Yogi) is a freelance philosopher, researcher & educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.

BI 117 Anil Seth: Being You

Anil and I discuss a range of topics from his book, Being You: A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the “real problem” of consciousness. You know the “hard problem”, David Chalmers’ term for our eternal difficulty explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil’s “real problem” aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.

BI 116 Michael W. Cole: Empirical Neural Networks

Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike’s approach is different in at least two ways. For one, he builds the architecture of his models using structural connectivity data from fMRI recordings. Two, he doesn’t train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.