BI 116 Michael W. Cole: Empirical Neural Networks

Brain Inspired

Mike and I discuss his modeling approach to studying cognition. Many of my podcast guests use deep neural networks to study brains: the idea is to train or optimize a model to perform a task, then compare the model's properties with brain properties. Mike's approach differs in at least two ways. First, he builds the architecture of his models using structural connectivity data from fMRI recordings. Second, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign the weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach and what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.

BI 115 Steve Grossberg: Conscious Mind, Resonant Brain

Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.

BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.

BI 113 David Barack and John Krakauer: Two Views On Cognition

David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue that the recent population-based dynamical systems approach is a promising route to understanding the brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspective as both a practicing neuroscientist and a philosopher.

BI ViDA Panel Discussion: Deep RL and Dopamine

What can artificial intelligence teach us about how the brain uses dopamine to learn? Recent advances in artificial intelligence have yielded novel algorithms for reinforcement learning (RL), which leverage the power of deep learning together with reward prediction error signals to achieve unprecedented performance in complex tasks. In the brain, reward prediction errors are thought to be signaled by midbrain dopamine neurons and to support learning. Can these new advances in deep RL help us understand the role that dopamine plays in learning? In this panel, experts in both theoretical and experimental dopamine research discuss this question.