All Episodes

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Large language models, often now called “foundation models”, are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.
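
Next-word prediction, which both guests return to, is easy to see concretely. Below is a minimal sketch using the Hugging Face transformers library; GPT-2 and the prompt are chosen here purely for illustration and are not specific to the episode:

```python
# A minimal sketch of next-word prediction with a pretrained
# transformer. GPT-2 is used only because it is small and public;
# it is not a model discussed in the episode.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The last position's logits give a distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p:.3f}")
```

The model assigns a probability to every token in its vocabulary; generation simply samples (or picks) from that distribution and repeats.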

BI 143 Rodolphe Sepulchre: Mixed Feedback Control

Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control – positive and negative – as an underlying principle of the brain’s mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder’s lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how “If you wish to contribute original work, be prepared to face loneliness,” among other topics.

BI 142 Cameron Buckner: The New DoGMA

Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate over how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other work, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological “domain-general faculties” underlie our own intelligence, and on how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and combining those systems so that they cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Hence what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content – how our thoughts connect to the external natural world.

BI 141 Carina Curto: From Structure to Dynamics

Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience – the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on combinatorial threshold-linear networks (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model’s allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
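
For a concrete anchor, here is a sketch of the CTLN setup as I understand it from Curto and colleagues’ papers: the dynamics are threshold-linear, and the weight matrix is built combinatorially from a directed graph, so the graph alone largely determines the dynamics.

```latex
% Threshold-linear dynamics for n neurons with firing rates x_i:
\frac{dx_i}{dt} = -x_i + \Big[ \sum_{j=1}^{n} W_{ij}\, x_j + \theta \Big]_+ ,
\qquad [y]_+ = \max(y, 0),
% with the weights determined combinatorially by a directed graph G:
W_{ij} =
\begin{cases}
0 & \text{if } i = j, \\
-1 + \varepsilon & \text{if } j \to i \text{ in } G, \\
-1 - \delta & \text{if } j \not\to i \text{ in } G,
\end{cases}
\qquad \varepsilon, \delta > 0 .
```

Because only the graph varies, questions about the network’s dynamics become questions about graph structure – the source of the tractability Carina is after.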

BI 140 Jeff Schall: Decisions and Eye Movements

Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention, studied within the saccadic eye movement system of the brain and in mathematical psychology models – in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking propositions, introduced by Davida Teller, are logical statements meant to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong inference, introduced by John Platt, is the scientific method on steroids – a way to make our scientific practice as productive and efficient as possible. We discuss both of these topics in the context of Jeff’s eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, how the relatively small models he employs compare with today’s huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff’s work and approach, I recommend reading, in order, the two review papers we discuss. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).

BI 139 Marc Howard: Compressed Time and Memory

Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations “spread out” in time. It turns out this kind of representation appears ubiquitous in the brain, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning networks to improve their ability to handle information at different time scales.
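
In rough mathematical form, here is a sketch of the framework as presented in Shankar and Howard’s papers (the notation here is mine):

```latex
% A population of units indexed by decay rates s encodes the Laplace
% transform of the input history f up to the present time t:
F(s, t) = \int_{-\infty}^{t} e^{-s (t - t')} \, f(t') \, dt' .
% An approximate inverse (the Post inversion formula of order k)
% recovers a blurred estimate of the input from a time \tau^* ago:
\tilde{f}(\tau^*, t) \;\propto\; (-1)^k \, s^{k+1} \,
\frac{\partial^k F(s, t)}{\partial s^k} \Big|_{s = k / \tau^*} .
% Sampling s on a logarithmic grid yields the log-scale compression:
% the blur in \tilde{f} grows in proportion to how long ago the
% remembered event occurred.
```

The “spreading out” of older memories falls directly out of the inverse: the estimate at lag $\tau^*$ has resolution proportional to $\tau^*$ itself.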

BI 138 Matthew Larkum: The Dendrite Hypothesis

Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers – and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receive feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receive feedback-like input. Depending on which set of dendrites is receiving input – neither, one, or both – the neuron’s output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.

BI 137 Brian Butterworth: Can Fish Count?

Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.

BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology

We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can’t escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our “normal” scientific explorations can co-exist, including the study of minds, brains, and intelligence – our own and that of other organisms. We also discuss the “blind spot” of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.

BI 135 Elena Galea: The Stars of the Brain

Brains are often conceived as consisting of neurons and “everything else.” As Elena discusses, the “everything else,” including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That’s partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it’s possible to examine how astrocytes’ signaling, with each other and with neurons, may complement the cognitive roles once thought to be the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and – Elena’s favorite current hypothesis – their integrative role in negative feedback control.

BI 134 Mandyam Srinivasan: Bee Flight and Cognition

Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to gauge their speed, distance traveled, and proximity to objects, and to land gracefully. These abilities are largely governed via control systems, balancing incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
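
One of those control principles is easy to simulate: in Srinivasan’s landing experiments, bees appear to hold the angular velocity of the ground image (the optic flow) constant as they descend, which automatically makes forward speed shrink in proportion to height. The toy sketch below illustrates only that principle; every parameter value is made up for illustration.

```python
# Toy simulation of a constant-optic-flow landing rule, inspired by
# Srinivasan's bee experiments. All numbers are hypothetical; only
# the control principle comes from the research.
flow_setpoint = 2.0   # desired angular velocity of ground image (rad/s)
gain = 1.5            # hypothetical proportional-control gain
descent_ratio = 0.3   # descend at a fixed fraction of forward speed

height, speed, dt = 2.0, 5.0, 0.01   # meters, m/s, seconds
while height > 0.05:
    flow = speed / height                        # ground-plane optic flow
    speed += gain * (flow_setpoint - flow) * dt  # regulate flow to setpoint
    height -= descent_ratio * speed * dt

# Holding flow constant makes speed track height, so both decay
# together and the bee arrives at the ground at near-zero speed.
print(f"touchdown speed: {speed:.2f} m/s")
```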

BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep

Ken Paller and his team have demonstrated real-time, two-way communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are also developing a lucid dreaming app, which is freely available via his lab. We also discuss much of his work on memory and learning in general, and specifically in relation to sleep, like reactivating specific memories during sleep to improve learning.

BI 132 Ila Fiete: A Grid Scaffold for Memory

Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex – signals about what’s happening in the world and in our thoughts. Thus, the place cells act to “pin” what’s happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist, her approach as a “neurophysicist”, and a review she’s publishing about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.

BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It’s an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building “neuromodulation-aware DNNs”.

BI 130 Eve Marder: Modulation of Networks

Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve’s work has shown it functions under a remarkable diversity of conditions, and does so in a wide variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.

BI 129 Patryk Laurent: Learning from the Real World

Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resulting perspective on what’s needed to move forward in AI, including which principles of brain processing matter more and which matter less. We also discuss his own work using some of those principles to help deep learning generalize, so as to better capture how humans behave in and perceive the world.

BI 128 Hakwan Lau: In Consciousness We Trust

Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating “engram cells” originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition into instincts across generations.