All Episodes

BI 151 Steve Byrnes: Brain-like AGI Safety

Steve Byrnes is a physicist turned AGI safety researcher. He’s concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.

BI 150 Dan Nicholson: Machines, Organisms, Processes

Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the “machine conception of the organism” is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.

BI 149 William B. Miller: Cell Intelligence

William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life’s Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA) and an enormous number and diversity of foreign cells – our microbiome – that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the “era of the cell” in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.

BI 148 Gaute Einevoll: Brain Simulations

Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls “measurement physics”). We also discuss Gaute’s thoughts on Carina Curto’s “beautiful vs. ugly models”, and his reaction to Noah Hutton’s In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).
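
To make the “measurement physics” idea concrete, here is a minimal sketch (my own toy example, not Gaute’s code) of the standard volume-conductor forward model, which predicts the extracellular potential an electrode would record from a set of point current sources; the conductivity value and the geometry are assumptions chosen for illustration.

```python
import numpy as np

# "Measurement physics" sketch: predict the extracellular potential an
# electrode records from point current sources, using the standard
# volume-conductor (point-source) model: V_e = (1/(4*pi*sigma)) * sum I_n/r_n

sigma = 0.3  # extracellular conductivity (S/m); a typical assumed value

def extracellular_potential(electrode_pos, source_pos, source_currents):
    """electrode_pos: (3,), source_pos: (N, 3), source_currents: (N,) in amps."""
    r = np.linalg.norm(source_pos - electrode_pos, axis=1)  # distances (m)
    return np.sum(source_currents / r) / (4 * np.pi * sigma)

# Toy example: two compartments of a model neuron acting as a current dipole.
sources = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 500e-6]])  # soma, apical tuft
currents = np.array([-1e-9, 1e-9])   # sink at the soma, source in the tuft (A)
electrode = np.array([50e-6, 0.0, 0.0])
print(extracellular_potential(electrode, sources, currents))  # ~ -5 microvolts
```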

BI 147 Noah Hutton: In Silico

Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout the following decade. The result was In Silico, which tells the scientific, human, and social story of Henry’s massively funded projects – the Blue Brain Project and the Human Brain Project.

BI 146 Lauren Ross: Causal and Non-Causal Explanation

Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward’s interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim’s lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.

BI 145 James Woodward: Causation with a Human Face

James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention – intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality – the normative – needs to be studied together with how we actually do think about causal relations in the world – the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.
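
To give a flavor of the interventionist idea, here is a toy structural causal model (entirely made up, not an example from Jim’s book): setting a variable by fiat, rather than letting its usual causes determine it, should shift the distribution of its effects, and that asymmetry is what the interventionist account uses to pick out causal relations.

```python
import random

# Toy interventionist illustration: Y is caused by X. Intervening on X
# (the do-operation: set X by fiat, severing its usual causes) changes
# Y's distribution; observing X without intervening leaves Y tracking
# X's natural variation. All variables and equations here are invented.

def sample(do_x=None):
    u = random.gauss(0, 1)                   # exogenous noise driving X
    x = do_x if do_x is not None else u + random.gauss(0, 0.1)
    y = 2.0 * x + random.gauss(0, 0.1)       # structural equation: Y <- 2X
    return x, y

random.seed(0)
baseline = [sample() for _ in range(10000)]
intervened = [sample(do_x=1.0) for _ in range(10000)]
print(sum(y for _, y in baseline) / 10000)    # ~0: Y follows X's natural mean
print(sum(y for _, y in intervened) / 10000)  # ~2: intervening on X moves Y
```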

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models

Large language models, often now called “foundation models”, are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.

BI 143 Rodolphe Sepulchre: Mixed Feedback Control

Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control – positive and negative – as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder’s lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how “If you wish to contribute original work, be prepared to face loneliness,” among other topics.
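
For a flavor of what mixed feedback means, here is a minimal sketch using the classic FitzHugh–Nagumo equations (a textbook model, not Rodolphe’s own circuits): a fast positive feedback loop and a slow negative feedback loop interact to turn a constant input into rhythmic spiking.

```python
import numpy as np

# Mixed feedback in miniature: the FitzHugh-Nagumo neuron. The cubic
# term (v - v^3/3) acts as fast positive feedback on the voltage v, and
# the recovery variable w provides slow negative feedback; together they
# convert a constant input current I into sustained spiking.

def simulate(I=0.5, dt=0.01, steps=20000):
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for t in range(steps):
        dv = v - v**3 / 3 - w + I         # fast positive feedback
        dw = 0.08 * (v + 0.7 - 0.8 * w)   # slow negative feedback
        v += dt * dv
        w += dt * dw
        vs[t] = v
    return vs

trace = simulate()
print(trace.min(), trace.max())  # v oscillates between spike and recovery
```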

BI 142 Cameron Buckner: The New DoGMA

Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological “domain-general faculties” underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content – how our thoughts connect to the natural external world.

BI 141 Carina Curto: From Structure to Dynamics

Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience – the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on “combinatorial threshold-linear networks” (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model’s allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
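
As a small illustration of going from structure to dynamics, here is a sketch of a CTLN as I understand the published construction: the weight matrix is built purely from a directed graph, and the parameter values below are the commonly used defaults (an assumption on my part); a 3-cycle graph is a standard example that yields a limit cycle.

```python
import numpy as np

# Sketch of a combinatorial threshold-linear network (CTLN). Dynamics:
#   dx/dt = -x + [W x + b]_+
# where W comes entirely from a directed graph G: W[i][j] = -1 + eps if
# there is an edge j -> i, -1 - delta otherwise, and 0 on the diagonal.
eps, delta, b = 0.25, 0.5, 1.0          # commonly used default parameters
edges = [(0, 1), (1, 2), (2, 0)]        # directed 3-cycle: 0->1, 1->2, 2->0
n = 3

W = np.full((n, n), -1.0 - delta)
for j, i in edges:                      # edge j -> i weakens inhibition
    W[i, j] = -1.0 + eps
np.fill_diagonal(W, 0.0)

x = np.array([0.2, 0.0, 0.0])           # start with neuron 0 slightly active
dt, seq = 0.01, []
for _ in range(8000):                    # forward-Euler integration
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
    w = int(np.argmax(x))
    if not seq or seq[-1] != w:
        seq.append(w)
print(seq)  # the most active neuron rotates 0 -> 1 -> 2 -> ...: a limit cycle
```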

BI 140 Jeff Schall: Decisions and Eye Movements

Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers around studying the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models – in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking propositions, described by Davida Teller, are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong inference, described by John Platt, is the scientific method on steroids – a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff’s eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, we compare the relatively small models he employs with the huge deep learning models, many of his current projects, and plenty more. If you want to learn more about Jeff’s work and approach, I recommend reading, in order, two of his review papers we discuss as well: one written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).
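
For readers unfamiliar with accumulator models of the kind discussed in the episode, here is a generic toy race model (not a model from Jeff’s papers): two stochastic accumulators, one per saccade target, race toward a threshold; the winner determines where we look and the crossing time determines when. All parameter values are invented.

```python
import random

# Toy race-accumulator model of a saccade decision. Each candidate
# target has an accumulator that drifts up noisily; the first to reach
# threshold determines the choice (where) and the response time (when).

def trial(drift_left=0.10, drift_right=0.12, noise=0.5, threshold=50.0, dt=1.0):
    acc = {"left": 0.0, "right": 0.0}
    drifts = {"left": drift_left, "right": drift_right}
    t = 0.0
    while True:
        t += dt
        for target in acc:
            step = drifts[target] * dt + random.gauss(0, noise)
            acc[target] = max(0.0, acc[target] + step)  # rectified accumulation
        for target, value in acc.items():
            if value >= threshold:
                return target, t      # choice and response time

random.seed(1)
print([trial() for _ in range(3)])    # mostly "right" wins, with variable RTs
```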

BI 139 Marc Howard: Compressed Time and Memory

Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations “spread out” in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use to support a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales.
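
Here is a toy version of the Laplace framework (my own sketch, not Marc’s code): a bank of leaky integrators with log-spaced decay rates computes a running Laplace transform of an input signal, and a Post-style approximate inverse recovers a log-compressed timeline of the past. The choice of rates, the inversion order k, and the numerical differentiation are all assumptions made for illustration.

```python
import math
import numpy as np

# Laplace-transform memory in miniature. Each leaky integrator with
# decay rate s computes a running Laplace transform of the input f(t):
#   dF/dt = -s * F + f(t)
s = np.logspace(-1.5, 1.0, 60)     # log-spaced decay rates (assumed)
F = np.zeros_like(s)
dt, k = 0.01, 4                    # time step and inversion order (assumed)

signal = np.zeros(3000)
signal[100] = 1.0 / dt             # a unit impulse ~29 seconds in the past

for f_t in signal:
    F += dt * (-s * F + f_t)       # run the integrators forward

# Post-style approximate inverse: the k-th derivative of F with respect
# to s estimates f at internal, log-compressed times tau* = k/s.
dF = F.copy()
for _ in range(k):
    dF = np.gradient(dF, s)
timeline = ((-1) ** k) * s ** (k + 1) * dF / math.factorial(k)
tau_star = k / s
# For small k the estimate peaks a bit before the true lag and is
# "spread out" in time -- exactly the compression the episode discusses.
print(tau_star[np.argmax(timeline)])
```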

BI 138 Matthew Larkum: The Dendrite Hypothesis

Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers – and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input – neither, one, or both – the neuron’s output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals with feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
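
The input-to-mode mapping described above is simple enough to state as a cartoon lookup (this is a caricature of the idea, not one of Matthew’s biophysical models):

```python
# Cartoon of the dendrite hypothesis: a layer-5 pyramidal neuron's
# output mode depends on which dendritic compartments receive input.

def output_mode(basal_input: bool, apical_input: bool) -> str:
    if basal_input and apical_input:
        return "burst spiking"    # coincidence of feedforward + feedback
    if basal_input:
        return "regular spiking"  # feedforward drive alone
    return "silent"               # apical input alone cannot drive output

for basal in (False, True):
    for apical in (False, True):
        print(f"basal={basal}, apical={apical} -> {output_mode(basal, apical)}")
```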

BI 137 Brian Butterworth: Can Fish Count?

Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.

BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology

We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can’t escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our “normal” scientific explorations can co-exist, including the study of minds, brains, and intelligence – our own and that of other organisms. We also discuss the “blind spot” of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.

BI 135 Elena Galea: The Stars of the Brain

Brains are often conceived as consisting of neurons and “everything else.” As Elena discusses, the “everything else,” including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That’s partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it’s possible to study how astrocytes’ signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and – Elena’s favorite current hypothesis – their integrative role in negative feedback control.

BI 134 Mandyam Srinivasan: Bee Flight and Cognition

Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
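
As a cartoon of the graceful-landing idea (my simplification, with made-up numbers): if a bee regulates its forward speed so that optic flow (forward speed divided by height) stays constant while descending in proportion to forward speed, then height and speed decay together and touchdown is automatically gentle, with no explicit knowledge of altitude needed.

```python
# Cartoon of the constant-optic-flow landing strategy (made-up numbers).
dt = 0.05        # simulation time step (s)
omega = 1.0      # optic-flow setpoint: forward speed / height (1/s)
alpha = 0.3      # descent speed as a fraction of forward speed
h, t = 2.0, 0.0  # starting height (m) and elapsed time (s)

while h > 0.01:              # stop just above the ground
    v = omega * h            # regulate speed so optic flow v/h stays constant
    h -= alpha * v * dt      # descend in proportion to forward speed
    t += dt
print(f"touchdown after ~{t:.1f} s at forward speed {omega * h:.3f} m/s")
```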