All Episodes

BI 135 Elena Galea: The Stars of the Brain

Brain Inspired

Brains are often conceived as consisting of neurons and “everything else.” As Elena discusses, the “everything else,” including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That’s partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it’s possible to study how astrocytes’ signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and – Elena’s favorite current hypothesis – their integrative role in negative feedback control.

BI 134 Mandyam Srinivasan: Bee Flight and Cognition


Srini is an Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance traveled, and proximity to objects, and to land gracefully. These abilities are largely governed by control systems that balance incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
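The control-systems idea lends itself to a toy sketch: if an agent adjusts its speed to hold perceived optic flow at an internally set reference, it automatically slows down when surfaces are close. This is only an illustration of the principle, not a model from Srini's work; the gains, reference value, and names are made up.

```python
def optic_flow(speed, wall_distance):
    """Angular image velocity of wall texture: faster or closer -> more flow."""
    return speed / wall_distance

def simulate(wall_distance, flow_reference=2.0, gain=0.5, steps=200):
    """Proportional control: adjust speed to hold optic flow at the reference."""
    speed = 1.0
    for _ in range(steps):
        error = flow_reference - optic_flow(speed, wall_distance)
        speed += gain * error * wall_distance  # scale correction for stability
        speed = max(speed, 0.0)
    return speed

# Holding flow constant makes the agent fly slower in a narrower corridor
wide = simulate(wall_distance=2.0)    # settles near 4.0
narrow = simulate(wall_distance=0.5)  # settles near 1.0
```

The controller never measures distance directly; slowing near walls falls out of regulating a single perceptual variable, which is the elegance of the strategy.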

BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep


Ken and I discuss his team's work demonstrating two-way communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App, which is freely available via his lab. We also discuss much of his work on memory and learning in general, and specifically related to sleep, like reactivating specific memories during sleep to improve learning.

BI 132 Ila Fiete: A Grid Scaffold for Memory


Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what’s happening in the world and in our thoughts. Thus, the place cells act to “pin” what’s happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist, her approach as a “neurophysicist”, and a review she’s publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.
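The "pinning" idea can be caricatured in a few lines: pre-structured scaffold patterns (standing in for internally generated grid-network states) are associated with arbitrary content patterns via a Hebbian rule, and the content can later be recalled from the scaffold alone. This is my toy illustration, not Ila's actual model; the dimensions, random patterns, and learning rule are all assumptions.

```python
import random
random.seed(0)

n_events, n_scaffold, n_content = 5, 64, 32

def rand_pattern(n):
    """Random +/-1 activity pattern."""
    return [random.choice([-1, 1]) for _ in range(n)]

scaffold = [rand_pattern(n_scaffold) for _ in range(n_events)]  # grid-like states
content = [rand_pattern(n_content) for _ in range(n_events)]    # cortical signals

# Hebbian hetero-association: "pin" each content pattern to its scaffold state
W = [[sum(content[e][k] * scaffold[e][m] for e in range(n_events)) / n_scaffold
      for m in range(n_scaffold)] for k in range(n_content)]

def recall(s):
    """Read content back out from a scaffold state alone."""
    return [1 if sum(W[k][m] * s[m] for m in range(n_scaffold)) >= 0 else -1
            for k in range(n_content)]

correct = sum(recall(scaffold[e])[k] == content[e][k]
              for e in range(n_events) for k in range(n_content))
accuracy = correct / (n_events * n_content)  # near-perfect for few events
```

Because the scaffold is fixed and well-structured, the hard part of memory (a reliable index) is prewired, and learning only has to bind content to it.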

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs


Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It’s an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building “neuromodulation-aware DNNs”.

BI 130 Eve Marder: Modulation of Networks


Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve’s work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.

BI 129 Patryk Laurent: Learning from the Real World


Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what’s needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.

BI 128 Hakwan Lau: In Consciousness We Trust


Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges in empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting


Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating “engram cells” originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition into instincts across generations.

BI 126 Randy Gallistel: Where Is the Engram?


Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys


Doris, Tony, and Blake are the organizers of this year’s NAISys conference, From Neuroscience to Artificially Intelligent Systems, at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain


Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the “start”. This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won’t be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.
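The growth-versus-direct-encoding contrast can be illustrated with a minimal rewrite system (my toy analogy, not an example from the book): a tiny "genome" of rules, unfolded step by step through time, produces a structure far larger than the rules that generated it.

```python
# A two-rule "genome", unfolded iteratively (L-system style)
rules = {"A": "AB", "B": "A"}

state = "A"
for _ in range(10):  # growth over time unfolds the encoded information
    state = "".join(rules[symbol] for symbol in state)

# The product is far larger than the rules: 144 symbols from a 2-rule genome
print(len(state))  # 144
```

A deep network's weight matrix, by contrast, is a direct encoding: every parameter is specified explicitly, with no generative process between genome and product.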

BI 123 Irina Rish: Continual Learning


Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to backpropagation, which use “auxiliary variables” in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.
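Catastrophic forgetting is easy to reproduce in miniature. In this sketch (my illustration, with made-up tasks and learning rate), a single linear unit trained by gradient descent masters task A, then loses it entirely after training on task B:

```python
def train(w, data, lr=0.1, epochs=100):
    """Plain SGD on squared error for a one-weight linear model y = w*x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # solved by w = 2
task_b = [(1.0, -1.0), (2.0, -2.0)]  # solved by w = -1

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)  # near zero: task A learned
w = train(w, task_b)
loss_a_after = loss(w, task_a)   # large: task A forgotten
```

Because both tasks compete for the same parameters, the second task simply overwrites the first; continual-learning methods differ mainly in how they protect or replay the old solution.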

BI 122 Kohitij Kar: Visual Intelligence


Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in Jim DiCarlo’s lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine those models, adding important biological details and incorporating models of brain areas outside the ventral visual stream. He will also continue recording neural activity and performing perturbation studies to better understand the networks involved in our visual cognition.

BI 121 Mac Shine: Systems Neurobiology


Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.

BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories


James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories of individual events, full of particular details, and that a complementary process slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning networks have yet to achieve. We discuss what their theory predicts about how the “correct” process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.
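As a caricature of the complementary-systems idea (my illustration, not the authors' model; numbers and learning rates are assumptions), a fast system can store noisy episodes verbatim while a slow system, trained by replaying them, converges on the underlying regularity:

```python
import random
random.seed(1)

true_value = 5.0  # the regularity hidden behind noisy individual events
episodes = [true_value + random.gauss(0, 1) for _ in range(100)]

hippocampus = list(episodes)  # fast, detailed, one-shot storage
cortex = 0.0
for _ in range(20):                         # consolidation epochs
    for sample in hippocampus:              # hippocampal replay
        cortex += 0.01 * (sample - cortex)  # slow cortical update

# cortex ends near the noise-free regularity, not any single noisy episode
```

The slow learning rate is what averages out episodic noise; how much noise there is in the environment governs how aggressively consolidation should smooth over the details.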

BI 119 Henry Yin: The Crisis in Neuroscience


Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied… by the experimenter.
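The hierarchical control-loop picture can be sketched as two nested proportional loops, where the outer loop's output serves as the inner loop's reference, and the top-level reference is internally generated rather than supplied by an experimenter. This is a minimal caricature with arbitrary gains, not a model from Henry's work:

```python
position, velocity = 0.0, 0.0
position_goal = 10.0  # internally generated reference, not externally supplied

for _ in range(200):
    # Outer loop: compare perceived position to the internal reference;
    # its output becomes the inner loop's reference signal
    velocity_ref = 0.2 * (position_goal - position)
    # Inner loop: drive perceived velocity toward that reference
    velocity += 0.5 * (velocity_ref - velocity)
    position += velocity

# The system settles at the internally specified goal (position near 10)
```

Each loop controls only its own perceived variable; behavior emerges from the stack of loops, not from a stimulus-response mapping.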

BI 118 Johannes Jäger: Beyond Networks


Johannes (Yogi) is a freelance philosopher, researcher & educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating between genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.