All Episodes

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.

BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.

BI 113 David Barack and John Krakauer: Two Views On Cognition

David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David’s perspectives as a practicing neuroscientist and philosopher.

BI ViDA Panel Discussion: Deep RL and Dopamine

What can artificial intelligence teach us about how the brain uses dopamine to learn? Recent advances in artificial intelligence have yielded novel algorithms for reinforcement learning (RL), which leverage the power of deep learning together with reward prediction error signals to achieve unprecedented performance on complex tasks. In the brain, reward prediction errors are thought to be signaled by midbrain dopamine neurons and to support learning. Can these new advances in deep RL help us understand the role dopamine plays in learning? In this panel, experts in theoretical and experimental dopamine research discuss this question.

BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine

Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning: dopamine (DA) neurons fire when our reward expectations aren't met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped make reinforcement learning in AI a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
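
To make the reward prediction error concrete, here is a minimal sketch of temporal-difference (TD) value learning, the standard computational analogy for the phasic dopamine signal. The toy states, learning rate, and discount factor below are illustrative assumptions, not details from the episode.

# Minimal temporal-difference (TD) learning sketch (Python).
# The TD error (delta), the difference between the reward received and the
# reward expected, is the quantity commonly compared to phasic dopamine
# firing: positive when an outcome is better than expected, negative when worse.

alpha = 0.1  # learning rate (illustrative value)
gamma = 0.9  # discount factor (illustrative value)

# Value estimates for a hypothetical two-state world.
V = {"cue": 0.0, "reward_port": 0.0}

def td_update(state, next_state, reward):
    """Update V[state] from one observed transition; return the TD error."""
    delta = reward + gamma * V[next_state] - V[state]  # reward prediction error
    V[state] += alpha * delta                          # adjust the expectation
    return delta

# An unexpected reward yields a large positive error ("DA neurons fire"):
print(td_update("cue", "reward_port", reward=1.0))  # delta = 1.0
# With repeated pairings the error shrinks: the reward is now expected.
for _ in range(50):
    td_update("cue", "reward_port", reward=1.0)
print(td_update("cue", "reward_port", reward=1.0))  # delta near 0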

BI NMA 06: Advancing Neuro Deep Learning Panel

This is the sixth in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the third of three in the deep learning series. In this episode, the panelists discuss their experiences with advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning, and causality.

BI NMA 05: NLP and Generative Models Panel

This is the fifth in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the second of three in the deep learning series. In this episode, the panelists discuss their experiences "doing more with fewer parameters": convnets, RNNs, attention & transformers, and generative models (VAEs & GANs).

BI NMA 04: Deep Learning Basics Panel

This is the fourth in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the first of three in the deep learning series. In this episode, the panelists discuss their experiences with some basics of deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, and regularization.

BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness

Erik, Kevin, and I discuss… well, a lot of things. We talk about both of their books, then dive deeper into topics like whether brains evolved for moving our bodies or for consciousness, how information theory is lending insight into emergent phenomena, and the role of agency in what counts as intelligence.

BI NMA 03: Stochastic Processes Panel

This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.

BI NMA 02: Dynamical Systems Panel

This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks.

BI NMA 01: Machine Learning Panel

This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.

BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target systems they are built to explain. She suggests both the model and the target system should be considered instantiations of a specific kind of phenomenon, and that explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects – like a brain area or a deep learning model – to the shared class of phenomena realized by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.

BI 109 Mark Bickhard: Interactivism

Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world – a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). Representations are also functional: they serve to keep the organism far from thermodynamic equilibrium, which is what self-maintenance requires. Over the years, Mark has filled out Interactivism, starting with a process-metaphysics foundation and building from there to account for representations, how our brains might implement them, and why AI is hindered by our modern "encoding" notion of representation. We also compare Interactivism to similar frameworks, like enactivism, predictive processing, and the free energy principle.

BI 108 Grace Lindsay: Models of the Mind

Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to studying minds and brains. Each chapter of the book focuses on one major topic, providing historical context, the major concepts that connect models to brain functions, and the current landscape of related research. We cover a handful of those topics during the episode, including the birth of AI, the difference between math in physics and in neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about a brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, its role and potential origins in theory of mind and social interaction, and how our metacognitive skills develop over our lifetimes. We also discuss what it might look like when we are able to build metacognitive AI, and whether that’s even a good idea.

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie is slightly worried that when we do, it will be time to worry about AI).

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn't or shouldn't work as well as it does. Deep learning poses a challenge for mathematics because its methods aren't rooted in mathematical theory and are therefore a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples from his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinitely many possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and why he doesn't share the current optimism in neuroscience about comparing brains to deep nets.