BI 127 Tomás Ryan: Memory, Instinct, and Forgetting

Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating the “engram cells” originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led him to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition into instincts across generations.

BI 126 Randy Gallistel: Where Is the Engram?

Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We also talk about research and theoretical work since the book that supports his views.

BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Doris, Tony, and Blake are the organizers of this year’s From Neuroscience to Artificially Intelligent Systems (NAISys) conference at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the “start”. This contrasts with modern deep learning networks, which begin with minimal information in their connectivity and instead rely almost solely on learning to gain their function. Robin suggests we won’t be able to create anything approaching human-like intelligence unless artificial networks are also built through an algorithmic growth process combined with evolutionary selection.
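As a rough illustration of the contrast (a toy numpy sketch, not a model from the book or the episode; the “genome” rules and numbers here are made up): a tiny set of growth rules is unfolded step by step into a network whose final wiring carries lineage structure that a random initialization of the same density lacks.

```python
import numpy as np

# Toy illustration only: a tiny "genome" of two growth rules is unfolded over
# developmental time into a connectivity matrix, then compared with a random
# initialization of matched edge density.

rng = np.random.default_rng(0)

N_FINAL = 32                       # number of cells after growth
GENOME = {"branch_period": 3,      # every 3rd new cell branches off an older cell
          "wire_back": 2}          # each new cell wires to up to 2 of its ancestors

def grow_network(genome, n_final):
    """Unfold the genome step by step into an adjacency matrix."""
    parents = [None]                           # lineage: parent index of each cell
    adj = np.zeros((n_final, n_final), dtype=int)
    for i in range(1, n_final):
        if i % genome["branch_period"] == 0:   # occasionally branch from an older cell
            parent = int(rng.integers(0, i))
        else:                                  # otherwise extend the latest lineage
            parent = i - 1
        parents.append(parent)
        target = parent
        for _ in range(genome["wire_back"]):   # wire to parent and grandparent
            adj[i, target] = 1
            if parents[target] is None:
                break
            target = parents[target]
    return adj

grown = grow_network(GENOME, N_FINAL)
random_init = (rng.random((N_FINAL, N_FINAL)) < grown.mean()).astype(int)

print("edges (grown):                          ", int(grown.sum()))
print("edges (random init):                    ", int(random_init.sum()))
# Structure the growth process produced "for free": links to the immediately
# preceding cell are common in the grown net, rare in the random one.
print("edges to immediate predecessor (grown): ", int(np.trace(grown, offset=-1)))
print("edges to immediate predecessor (random):", int(np.trace(random_init, offset=-1)))
```

The point of the sketch is only that the compact rule set, played out over time, yields connectivity with regularities the rules never list explicitly, whereas the matched-density random matrix starts with essentially none.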

BI 123 Irina Rish: Continual Learning

Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to backpropagation, which use “auxiliary variables” in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner so they improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning: a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.
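As a rough illustration of catastrophic forgetting (a toy numpy sketch, not anything from the episode; the tasks and numbers are made up): a tiny logistic-regression “network” is trained on task A, then only on task B, and its task-A accuracy collapses toward chance.

```python
import numpy as np

# Minimal sketch of catastrophic forgetting with a single logistic unit:
# sequential training on task B overwrites the weights that solved task A.

rng = np.random.default_rng(0)

def make_task(axis, n=500):
    """Task: classify 2-D points by the sign of one coordinate."""
    x = rng.normal(size=(n, 2))
    y = (x[:, axis] > 0).astype(float)
    return x, y

def train(w, b, x, y, epochs=200, lr=0.5):
    """Full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid output
        grad_w = x.T @ (p - y) / len(y)          # logistic-loss gradients
        grad_b = np.mean(p - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

def accuracy(w, b, x, y):
    return np.mean(((x @ w + b) > 0) == (y > 0.5))

xa, ya = make_task(axis=0)   # task A: sign of the first coordinate
xb, yb = make_task(axis=1)   # task B: sign of the second coordinate

w, b = np.zeros(2), 0.0
w, b = train(w, b, xa, ya)
print("after task A  ->  A acc:", accuracy(w, b, xa, ya))

w, b = train(w, b, xb, yb)   # phase 2: only task-B data, no replay of task A
print("after task B  ->  A acc:", accuracy(w, b, xa, ya),
      "  B acc:", accuracy(w, b, xb, yb))
```

Continual-learning strategies of the kind discussed in the episode (replay, regularizing important weights, task-specific modules) are different ways of preventing that second training phase from erasing the first.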