BI 128 Hakwan Lau: In Consciousness We Trust

Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges involved in empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan appeared on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.

BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating the “engram cells” originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition into instincts across generations.

BI 126 Randy Gallistel: Where Is the Engram?

Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into a code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about research and theoretical work since then that supports his views.

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Doris, Tony, and Blake are the organizers of this year’s NAISys conference, From Neuroscience to Artificially Intelligent Systems, at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information through time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the “start”. This contrasts with modern deep learning networks, which begin with minimal information in their connectivity and instead rely almost entirely on learning to gain their function. Robin suggests we won’t be able to create anything close to human-like intelligence unless we build artificial networks using an algorithmic growth process combined with an evolutionary selection process.