BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness

Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher-order theories (HOTs) of consciousness, which are related to metacognition. We discuss HOTs in particular and their relation to other approaches and theories, the idea of approaching consciousness as a computational problem to be tackled with computational modeling, the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness, two of the models they're working on now to account for various properties of conscious experience, and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters.

BI 098 Brian Christian: The Alignment Problem

Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that compromises our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:

BI 097 Omri Barak and David Sussillo: Dynamics and Structure

Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking.

BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths

K, Josh, and I were postdocs together in Jeff Schall’s and Geoff Woodman’s labs. K and Josh had backgrounds in psychology and were getting their first experience with neurophysiology, recording single neuron activity in awake behaving primates. This episode is a discussion surrounding their reflections and perspectives on neuroscience and psychology, given their backgrounds and experience (we reference episode 84 with György Buzsáki and David Poeppel). We also talk about their divergent paths – K stayed in academia and runs an EEG lab studying human decision-making and memory, and Josh left academia and has worked for three different pharmaceutical and tech companies. So this episode doesn’t get into gritty science questions, but is a light discussion about the state of neuroscience, psychology, and AI, and reflections on academia and industry, life in lab, and plenty more.

BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?

It’s generally agreed that machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. For example, should computer scientists and engineers care about how brains compute, or would that just slow them down? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering both the past and the present. This discussion also leads into related topics, like the role of prediction versus understanding, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe.