All Episodes
BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding
BI 179 Laura Gradowski: Include the Fringe with Pluralism
Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism, or scientific pluralism anyway, is roughly the idea that there is no unified account of any scientific field, and that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is kind of a buzzword right now in my little neuroscience world, but it’s an old and well-trodden notion… many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case she cites examples from the history of science of theories and theorists that were once considered “fringe” but went on to become mainstream, accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc.
BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions
Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks… how to think about them, why they matter, and how Eric’s perspectives have changed over his career.
BI 177 Special: Bernstein Workshop Panel
I was recently invited to moderate a panel at the annual Bernstein Conference, which this year was held in Berlin, Germany. The panel I moderated was part of a satellite workshop at the conference called “How can machine learning be used to generate insights and theories in neuroscience?” Below are the panelists. I hope you enjoy the discussion!
BI 176 David Poeppel Returns
David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.
BI 175 Kevin Mitchell: Free Agents
Kevin Mitchell is professor of genetics at Trinity College Dublin. He’s been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He’s back today to discuss his new book, Free Agents: How Evolution Gave Us Free Will.
BI 174 Alicia Juarrero: Context Changes Everything
In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds – how they’re organized, how they operate, how they’re formed and maintained, and so on.
BI 173 Justin Wood: Origins of Visual Intelligence
Justin Wood runs the Wood Lab at Indiana University, and his lab’s tagline is “building newborn minds in virtual worlds.” In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments.
BI 172 David Glanzman: Memory All The Way Down
David runs his lab at UCLA where he’s also a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they’re just right, and that serves to preserve the memory. That’s been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others’ experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old, discarded ideas in science often find their way back, what it’s like studying a non-mainstream topic, including the challenges of trying to get funded for it, and so on.
BI 171 Mike Frank: Early Language and Cognition
My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike’s main interests center on how children learn language – in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.
We discuss that, along with his love for developing open data sets that anyone can use; the dance he dances between bottom-up, data-driven approaches in this big data era, traditional experimental approaches, and top-down, theory-driven approaches; how early language learning in children differs from LLM learning; and Mike’s rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.
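If you’re curious what a rational speech act style model looks like computationally, here is a minimal sketch of the general RSA recursion, assuming a toy lexicon, toy referents, and a uniform prior that I invented for illustration; it is not Mike’s code or data.

```python
# A minimal sketch of a rational speech act (RSA) style model, for illustration only.
# The utterances, referents, and lexicon below are invented toy examples.
import numpy as np

utterances = ["hat", "glasses"]                        # what a speaker could say
referents = ["face_hat", "face_glasses", "face_both"]  # what a listener could pick

# Truth-conditional lexicon: does utterance u literally apply to referent r?
lexicon = np.array([
    [1.0, 0.0, 1.0],   # "hat" fits face_hat and face_both
    [0.0, 1.0, 1.0],   # "glasses" fits face_glasses and face_both
])
prior = np.ones(len(referents)) / len(referents)  # uniform prior over referents

def normalize(rows):
    return rows / rows.sum(axis=1, keepdims=True)

# Literal listener L0: P(r | u) proportional to lexicon truth value times prior.
L0 = normalize(lexicon * prior)

# Pragmatic speaker S1: P(u | r) proportional to exp(alpha * log L0(r | u)).
alpha = 1.0
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-10)))

# Pragmatic listener L1: P(r | u) proportional to S1(u | r) times prior.
L1 = normalize(S1.T * prior)

# "hat" literally fits two faces, but the pragmatic listener favors face_hat.
print(dict(zip(referents, L1[0].round(2))))
```

Running it shows the basic pragmatic effect: even though “hat” is literally true of two referents, the pragmatic listener concentrates its belief on the referent for which “hat” would have been the most informative choice.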
BI 170 Ali Mohebi: Starting a Research Lab
In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.
BI 169 Andrea Martin: Neural Dynamics and Language
My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to the physiological data we can measure from human brains.
BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness
This is the first in a mini-series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives?
BI 167 Panayiota Poirazi: AI Brains Need Dendrites
Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota’s opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks will need to favor as well moving forward.
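To make the two-layer idea concrete, here is a toy sketch contrasting a classic point neuron with a neuron whose dendritic branches each apply their own nonlinearity before the soma combines them. It is not Yiota’s 2003 model; the branch structure, weights, and nonlinearities are invented for illustration.

```python
# Toy contrast: a "point neuron" vs. a neuron whose dendritic branches act like
# hidden units, making the single neuron behave like a 2-layer network.
# Branch structure, weights, and nonlinearities are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_branches, inputs_per_branch = 4, 5

branch_weights = rng.normal(size=(n_branches, inputs_per_branch))  # synapses on each branch
soma_weights = rng.normal(size=n_branches)                         # how strongly each branch drives the soma

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def point_neuron(x):
    # Old story: all synaptic inputs sum linearly at the soma, one output nonlinearity.
    return sigmoid(branch_weights.ravel() @ x.ravel())

def dendritic_neuron(x):
    # Dendrite-aware story: each branch computes its own sigmoidal "hidden unit",
    # and the soma nonlinearly combines the branch outputs.
    branch_out = sigmoid(np.einsum("bi,bi->b", branch_weights, x))
    return sigmoid(soma_weights @ branch_out)

x = rng.normal(size=(n_branches, inputs_per_branch))  # one input pattern, grouped by branch
print(point_neuron(x), dendritic_neuron(x))
```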
BI 166 Nick Enfield: Language vs. Reality
Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What’s the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or for communicating those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people – I have a thought, I send it to you in language, and that thought is now in your head – then Nick wouldn’t take either side of that debate. He argues the function of language goes beyond the transmission of information; instead, it is primarily an evolved solution for social coordination – coordinating our behaviors and attention. When we use language, we’re creating maps in our heads so we can agree on where to go.
BI 165 Jeffrey Bowers: Psychology Gets No Respect
Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there’s a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image: researchers compared the activity in models good at doing that to the activity in the parts of our brains good at doing that. It’s been found in various other tasks using various other models and analyses, many of which we’ve discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models’ ability to predict an upcoming word. So the word is that these deep learning-type models are the best models of how our brains and cognition work.
BI 164 Gary Lupyan: How Language Affects Thought
Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language, naming things, and categorizing things change our cognition related to those things. How does naming something change our perception of it? And so on. He’s interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.
And we actually start the discussion with some of Gary’s work related to the variability of individual humans’ phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to test experimentally.
BI 163 Ellie Pavlick: The Mind of a Language Model
Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she’s going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren’t supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding – that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.
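For readers unfamiliar with probing, here is a generic sketch of the idea, with made-up random vectors standing in for a model’s hidden states; real probing studies, including the kind Ellie describes, extract those states from a trained language model rather than generating them synthetically.

```python
# A generic sketch of "probing" representations, for illustration only.
# Random vectors with a planted linear structure stand in for LLM hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, hidden_dim = 2000, 64

hidden_states = rng.normal(size=(n_examples, hidden_dim))    # stand-in for model activations
labels = (hidden_states[:, :3].sum(axis=1) > 0).astype(int)  # a property "encoded" in a few dimensions

X_train, X_test, y_train, y_test = train_test_split(hidden_states, labels, random_state=0)

# If a simple linear probe can read the property out of the representations,
# that is (weak) evidence the model encodes it in a linearly accessible form.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```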