All Episodes

BI 169 Andrea Martin: Neural Dynamics and Language

My guest today is Andrea Martin, who leads the Language and Computation in Neural Systems group at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness

This is the first in a mini-series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?

BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, carrying out many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks will need to favor as well moving forward.

BI 166 Nick Enfield: Language vs. Reality

Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or for communicating those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people – I have a thought, I send it to you in language, and that thought is now in your head – then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination – coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.

BI 165 Jeffrey Bowers: Psychology Gets No Respect

Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image: researchers compared the activity in models good at that task to the activity in the parts of our brains good at that task. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning models are the best models of how our brains and cognition work.

BI 164 Gary Lupyan: How Language Affects Thought

Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language, and naming and categorizing things, changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.
And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.

BI 163 Ellie Pavlick: The Mind of a Language Model

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbol-like might be implemented in the models, even though they are deep learning neural networks, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding – that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.

BI 162 Earl K. Miller: Thoughts are an Emergent Property

Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.

BI 161 Hugo Spiers: Navigation and Spatial Cognition

Hugo Spiers runs the Spiers Lab at University College London. In general, Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and in how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years regarding London taxi drivers and how their hippocampi change as a result of their grueling efforts to memorize how best to navigate London.

BI 160 Ole Jensen: Rhythms of Cognition

Ole Jensen is co-director of the Centre for Human Brain Health at the University of Birmingham, where he runs his Neuronal Oscillations Group. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to the parts of our brains that are relevant for whatever behaviors we're performing in different contexts. People have been studying oscillations for decades, linking different frequencies to a variety of cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior, and by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention – you may remember the episode with Carolyn Dicey-Jennings and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms in working memory, in moving our eyes, and in previewing what we're about to look at before we move our eyes. More broadly, we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.

BI 159 Chris Summerfield: Natural General Intelligence

Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI are hindered by the different languages each field speaks. But in reality, there has always been, and still is, a lot of overlap and convergence in their ideas about computation and intelligence, and he illustrates this using tons of historical and modern examples.

BI 158 Paul Rosenbloom: Cognitive Architectures

Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR anymore, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's the subject of his book On Computing: The Fourth Great Scientific Domain.

BI 157 Sarah Robins: Philosophy of Memory

Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, roughly the idea that our memories somehow leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 TomƔs Ryan: Memory, Instinct, and Forgetting).

BI 156 Mariam Aly: Memory, Attention, and Perception

Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam’s graduate school years, and how she now prioritizes her mental health.

BI 155 Luiz Pessoa: The Entangled Brain

Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain.

BI 154 Anne Collins: Learning with Working Memory

Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning as well, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated versus how overlapping and interacting our cognitive functions are, and what that implies about our natural tendency to think in dichotomies – like model-free vs. model-based reinforcement learning, system 1 vs. system 2, and so on. And we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.

BI 153 Carolyn Dicey-Jennings: Attention and the Self

Carolyn Dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting that slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and that this is a candidate for mental causation. We unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.

BI 152 Michael L. Anderson: After Phrenology: Neural Reuse

Michael L. Anderson is a professor at the Rotman Institute of Philosophy at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, along with other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively.