All Episodes
BI 065 Thomas Serre: How Recurrence Helps Vision
Thomas and I discuss the role of recurrence in visual cognition: how brains somehow excel with so few “layers” compared to deep nets, how feedback recurrence can underlie visual reasoning, how LSTM gate-like processing could explain the function of canonical cortical microcircuits, the current limitations of deep learning networks, such as adversarial examples, and a bit of the history of modeling our hierarchical visual system, including his work on the HMAX model and his interactions with the deep learning community as convolutional neural networks were being developed.
BI 064 Galit Shmueli: Explanation vs. Prediction
Galit and I discuss the independent roles of prediction and explanation in scientific models, their history and eventual separation in the philosophy of science, how they can inform each other, and how statisticians like Galit view the current deep learning explosion.
BI 063 Uri Hasson: The Way Evolution Does It
Uri and I discuss his recent perspective that conceives of brains as super-over-parameterized models that try to fit everything as exactly as possible rather than trying to abstract the world into usable models. He was inspired by the way artificial neural networks overfit data when they can, and how evolution works the same way on a much slower timescale.
BI 062 Stefan Leijnen: Creativity and Constraint
Stefan and I discuss creativity and constraint in artificial and biological intelligence. We talk about his Asimov Institute and its goal of artificial creativity and constraint, different types and functions of creativity, the neuroscience of creativity and its relation to intelligence, how constraint is an essential factor in all creative processes, and how computational accounts of intelligence may need to be discarded to account for our unique creative abilities.
BI 061 Jörn Diedrichsen and Niko Kriegeskorte: Brain Representations
Jörn, Niko and I continue the discussion of mental representation from last episode with Michael Rescorla, then we discuss their review paper, Peeling The Onion of Brain Representations, about different ways to extract and understand what information is represented in measured brain activity patterns.
BI 060 Michael Rescorla: Mind as Representation Machine
Michael and I discuss the philosophy and a bit of the history of mental representation, including the computational theory of mind and the language of thought hypothesis, how science and philosophy interact, how representation relates to computation in brains and machines, and levels of computational explanation. We also discuss some examples of representational approaches to mental processes, like Bayesian modeling.
BI 059 Wolfgang Maass: How Do Brains Compute?
In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general, principles of brain computation he finds promising for implementing better network models, and we quickly overview some of his recent work on using these principles to build models with biologically plausible learning mechanisms, a spiking network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.
BI 058 Wolfgang Maass: Computing Brains and Spiking Nets
In this first part of our conversation, Wolfgang and I discuss the state of theoretical and computational neuroscience, and how experimental results in neuroscience should guide theories and models to understand and explain how brains compute. We also discuss brain-machine interfaces, neuromorphics, and more. In the next part (to be released soon), we discuss principles of brain processing to inform and constrain theories of computations, and we briefly talk about some of his most recent work making spiking neural networks that incorporate some of these brain processing principles.
BI 057 Nicole Rust: Visual Memory and Novelty
Nicole and I discuss how a signature for visual memory can be coded among the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning.
BI 056 Tom Griffiths: The Limits of Cognition
I speak with Tom Griffiths about his “resource-rational framework”, inspired by Herb Simon’s bounded rationality and Stuart Russell’s bounded optimality concepts. The resource-rational framework illuminates how the constraints of optimizing our available cognition can help us understand what algorithms our brains use to get things done, and can serve as a bridge between Marr’s computational, algorithmic, and implementation levels of understanding. We also talk cognitive prostheses, artificial general intelligence, consciousness, and more.
BI 055 Thomas Naselaris: Seeing Versus Imagining
Thomas and I talk about what happens in the brain’s visual system when you see something versus imagine it. He uses generative encoding and decoding models and brain signals like fMRI and EEG to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he’s collected to infer models of the entire visual system, how we’ve still not tapped the potential of fMRI, and more.
BI 054 Kanaka Rajan: How Do We Switch Behaviors?
Kanaka and I discuss a few different ways she uses recurrent neural networks to understand how brains give rise to behaviors. We talk about her work showing how neural circuits transition from active to passive coping behavior in zebrafish, and how RNNs could be used to understand how we switch tasks in general and how we multi-task. Plus the usual fun speculation, advice, and more.
BI 053 Jon Brennan: Linguistics in Minds and Machines
Jon and I discuss understanding the syntax and semantics of language in our brains. He uses linguistic knowledge at the level of sentence and words, neurocomputational models, and neural data like EEG and fMRI to figure out how we process and understand language while listening to the natural language found in everyday conversations and stories. I also get his take on the current state of natural language processing and other AI advances, and how linguistics, neurolinguistics, and AI can contribute to each other.
BI 052 Andrew Saxe: Deep Learning Theory
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.
BI 051 Jess Hamrick: Mental Simulation and Construction
Jess and I discuss construction using graph neural networks. She makes AI agents that build structures to solve tasks in a simulated blocks-and-glue world using graph neural networks and deep reinforcement learning. We also discuss her work modeling mental simulation in humans and how it could be implemented in machines, and plenty more.
BI 050 Kyle Dunovan: Academia to Industry
BI 049 Phillip Alvelda: Trustworthy Brain Machines
Phillip and I discuss his company Brainworks, which uses the latest neuroscience to build AI into its products. We talk about their first product, Ambient Biometrics, that measures vital signs using your smartphone’s camera. We also dive into entrepreneurship in the AI startup world, ethical issues in AI, his early days using neural networks at NASA, where he thinks this is all headed, and more.
BI 048 Liz Spelke: What Makes Us Special?
Liz and I discuss her work on cognitive development, especially in infants, and what it can tell us about what makes human cognition different from other animals, what core cognitive abilities we’re born with, and how those abilities may form the foundation for many of our other cognitive abilities to develop. We also talk about natural language as the potential key faculty that synthesizes our early core abilities into the many higher cognitive functions that make us unique as a species, the potential for AI to capitalize on what we know about cognition in infants, plus plenty more.