Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.
David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David’s perspectives as a practicing neuroscientist and philosopher.
What can artificial intelligence teach us about how the brain uses dopamine to learn? Recent advances in artificial intelligence have yielded novel algorithms for reinforcement learning (RL), which leverage the power of deep learning together with reward prediction error signals to achieve unprecedented performance in complex tasks. In the brain, reward prediction errors are thought to be signaled by midbrain dopamine neurons and to support learning. Can these new advances in deep RL help us understand the role that dopamine plays in learning? In this panel, experts in both theoretical and experimental dopamine research discuss this question.
Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine (DA) is known to play a role in learning: DA neurons fire when our reward expectations aren't met, and that signal helps adjust our expectations. Roughly, DA activity corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
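The reward prediction error idea mentioned above can be sketched as a temporal-difference (TD) update, the standard formalization in reinforcement learning. This is a minimal illustrative sketch, not code from the episode; the function name and numeric values are invented for the example:

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) update. The error term delta plays the role ascribed
    to phasic dopamine: positive when outcomes beat expectations,
    negative when they fall short."""
    delta = reward + gamma * next_value - value  # reward prediction error
    return value + alpha * delta, delta

# We expected a reward of 1.0 but received nothing (and no future value):
new_value, rpe = td_update(value=1.0, reward=0.0, next_value=0.0)
print(rpe)        # -1.0, a negative prediction error (expectation unmet)
print(new_value)  # 0.9, expectation adjusted downward
```

Deep RL systems like those that master chess and Go use the same error signal, but with the value function approximated by a neural network rather than stored as a single number.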
This is the 6th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the 3rd of 3 in the deep learning series. In this episode, the panelists discuss their experiences with advanced topics in deep learning: unsupervised and self-supervised learning, reinforcement learning, continual learning, and causality.