All Episodes

BI 154 Anne Collins: Learning with Working Memory

Brain Inspired

Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she’s been working on for years is the role our working memory plays in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we’re trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated versus how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies – like model-free vs model-based RL, system-1 vs system-2, etc. – and we dive into plenty of other subjects, like how these ideas might be incorporated into AI.

BI 153 Carolyn Jennings: Attention and the Self

Carolyn Dicey Jennings is a philosopher and a cognitive scientist at University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject, that can’t be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow longer-range oscillations in our brains can alter or entrain the activity of more local neural activity, and this is a candidate for mental causation. We unpack that more in our discussion, and how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.

BI 152 Michael L. Anderson: After Phrenology: Neural Reuse

Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael’s research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively.

BI 151 Steve Byrnes: Brain-like AGI Safety

Steve Byrnes is a physicist turned AGI safety researcher. He’s concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.

BI 150 Dan Nicholson: Machines, Organisms, Processes

Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the “machine conception of the organism” is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.

BI 149 William B. Miller: Cell Intelligence

William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life’s Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous amount and diversity of foreign cells – our microbiome – that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the “era of the cell” in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.

BI 148 Gaute Einevoll: Brain Simulations

Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls “measurement physics”). We also discuss Gaute’s thoughts on Carina Curto’s “beautiful vs ugly models”, and his reaction to Noah Hutton’s In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).

BI 147 Noah Hutton: In Silico

Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry’s massively funded projects – the Blue Brain Project and the Human Brain Project.

BI 146 Lauren Ross: Causal and Non-Causal Explanation

Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward’s interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim’s lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.

BI 145 James Woodward: Causation with a Human Face

James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention – intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality – the normative – needs to be studied together with how we actually do think about causal relations in the world – the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Large language models, often now called “foundation models”, are the model du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.

BI 143 Rodolphe Sepulchre: Mixed Feedback Control

Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control – positive and negative – as an underlying principle of the brain’s mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder’s lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how “If you wish to contribute original work, be prepared to face loneliness,” among other topics.

BI 142 Cameron Buckner: The New DoGMA

Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological “domain-general faculties” underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content – how our thoughts connect to the natural external world.

BI 141 Carina Curto: From Structure to Dynamics

Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background skills in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience – the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on “combinatorial threshold-linear networks” (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model’s allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.

BI 140 Jeff Schall: Decisions and Eye Movements

Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers around studying the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models – in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions by Davida Teller are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference by John Platt is the scientific method on steroids – a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff’s eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, we compare the relatively small models he employs with the huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff’s work and approach, I recommend reading two of his review papers we discuss, in order: one written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other roughly 2 years ago (Accumulators, Neurons, and Response Time).

BI 139 Marc Howard: Compressed Time and Memory

Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations “spread out” in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales.
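As a rough illustration of the framework discussed in the episode – a minimal sketch of my own, not code from Marc's lab; the decay rates, grid, and approximation order are all assumptions chosen for the demo – a stimulus can be held in a bank of leaky integrators (the Laplace-space trace), and an approximate inverse Laplace transform (the Post approximation) turns that bank into "time cells" whose preferred past times are log-spaced and spread out with age:

```python
import numpy as np

k = 4                               # order of the Post inverse approximation (assumed)
s = np.geomspace(0.1, 10.0, 50)     # bank of decay rates, log-spaced (assumed grid)
t = 2.0                             # seconds elapsed since a stimulus at t = 0

# Laplace-space memory trace: each unit decays at its own rate s.
F = np.exp(-s * t)

# Post approximation of the inverse: for F = exp(-s*t) it is proportional
# to (s*t)^k * exp(-s*t), which peaks at s = k / t.
f = (s * t) ** k * np.exp(-s * t)

tau_star = k / s                    # each unit's preferred "time in the past"
peak_unit = int(np.argmax(f))
print(tau_star[peak_unit])          # a value near 2.0: the unit tuned to ~2 s ago wins
```

Because the s grid is log-spaced, the preferred times tau_star are also log-spaced, and the bump of activity across units gets wider as t grows – the "spreading out" of older memories described above.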

BI 138 Matthew Larkum: The Dendrite Hypothesis

Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers – and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron’s output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.

BI 137 Brian Butterworth: Can Fish Count?

Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.