In this second part of my discussion with Wolfgang (check out the first part), we talk about spiking neural networks in general and the principles of brain computation he finds promising for implementing better network models, and we give a quick overview of some of his recent work applying these principles: biologically plausible learning mechanisms, a spiking-network analog of the well-known LSTM recurrent network, and meta-learning using reservoir computing.
In this first part of our conversation, Wolfgang and I discuss the state of theoretical and computational neuroscience, and how experimental results in neuroscience should guide the theories and models we build to understand and explain how brains compute. We also discuss brain-machine interfaces, neuromorphic computing, and more. In the next part (to be released soon), we discuss principles of brain processing that can inform and constrain theories of computation, and we briefly talk about some of his most recent work building spiking neural networks that incorporate some of these principles.
Nicole and I discuss how a signature of visual memory can be encoded in the same population of neurons known to encode object identity, how the same coding scheme arises in convolutional neural networks trained to identify objects, and how neuroscience and machine learning (specifically reinforcement learning) can join forces to understand how curiosity and novelty drive efficient learning.
I speak with Tom Griffiths about his “resource-rational framework”, inspired by Herb Simon’s bounded rationality and Stuart Russell’s bounded optimality. The resource-rational framework illuminates how making optimal use of our limited cognitive resources can help us understand which algorithms our brains use to get things done, and it can serve as a bridge between Marr’s computational, algorithmic, and implementation levels of understanding. We also talk about cognitive prostheses, artificial general intelligence, consciousness, and more.
Thomas and I talk about what happens in the brain’s visual system when you see something versus when you imagine it. He uses generative encoding and decoding models, together with brain signals measured with fMRI and EEG, to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he’s collected to infer models of the entire visual system, how we’ve still not tapped the full potential of fMRI, and more.