Thomas and I talk about what happens in the brain’s visual system when you see something versus imagine it. He uses generative encoding and decoding models, together with brain signals measured with fMRI and EEG, to test the nature of mental imagery. We also discuss the huge fMRI dataset of natural images he’s collected to infer models of the entire visual system, how we’ve still not tapped the potential of fMRI, and more.
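To give a flavor of the encoding-model idea, here is a minimal sketch: ridge regression from image features to voxel responses, evaluated on held-out images. The feature dimensions, the synthetic data, and the choice of regression are all illustrative assumptions, not Thomas's actual dataset or pipeline.

```python
# Minimal sketch of a voxelwise encoding model: predict fMRI responses to
# natural images from image features via ridge regression. All shapes and
# data here are illustrative placeholders, not the dataset from the episode.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 1000, 256, 50
features = rng.standard_normal((n_images, n_features))       # e.g. Gabor or CNN features per image
true_weights = rng.standard_normal((n_features, n_voxels))
responses = features @ true_weights + rng.standard_normal((n_images, n_voxels))  # synthetic "fMRI" data

X_train, X_test, y_train, y_test = train_test_split(
    features, responses, test_size=0.2, random_state=0
)

# One ridge model predicts all voxels at once (scikit-learn handles multi-output targets).
encoder = Ridge(alpha=10.0)
encoder.fit(X_train, y_train)
pred = encoder.predict(X_test)

# Prediction accuracy per voxel: correlation between predicted and held-out responses.
per_voxel_r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out correlation across voxels: {np.median(per_voxel_r):.2f}")
```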
Kanaka and I discuss a few different ways she uses recurrent neural networks to understand how brains give rise to behaviors. We talk about her work showing how neural circuits transition from active to passive coping behavior in zebrafish, and how RNNs could help us understand task switching and multitasking more generally. Plus the usual fun speculation, advice, and more.
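As a rough illustration of the general approach (not Kanaka's zebrafish model), here is a sketch that trains a small recurrent network on a toy evidence-integration task; its hidden trajectories are the kind of object one would then compare against recorded neural activity. The task, architecture, and sizes are arbitrary assumptions.

```python
# Minimal sketch: train a recurrent network on a simple task, then inspect its
# hidden dynamics as a hypothesis for how a circuit might implement the behavior.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyRNN(nn.Module):
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)                # hidden states over time: (batch, time, n_hidden)
        return self.readout(h[:, -1]), h  # decision from the final state, plus the full trajectory

def make_batch(batch=128, steps=50):
    # Noisy evidence whose mean sign must be reported at the end of the trial.
    drift = torch.sign(torch.randn(batch, 1, 1))
    x = drift * 0.1 + 0.5 * torch.randn(batch, steps, 1)
    return x, drift.squeeze(-1)

model = TinyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    x, target = make_batch()
    out, _ = model(x)
    loss = nn.functional.mse_loss(out, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The hidden trajectories are what one would compare against recorded neural
# activity (e.g., with regression or dimensionality reduction).
x, _ = make_batch(batch=8)
_, h = model(x)
print("hidden trajectory shape:", h.shape)
```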
Jon and I discuss understanding the syntax and semantics of language in our brains. He uses linguistic knowledge at the level of sentences and words, neurocomputational models, and neural data like EEG and fMRI to figure out how we process and understand language while listening to the natural language found in everyday conversations and stories. I also get his take on the current state of natural language processing and other AI advances, and how linguistics, neurolinguistics, and AI can contribute to each other.
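One common analysis style in this area is to regress word-level linguistic predictors against the neural response to each word. Here is a minimal sketch with synthetic surprisal values and synthetic "EEG" amplitudes; nothing in it comes from Jon's actual data or models.

```python
# Minimal sketch: regress a word-level linguistic predictor (surprisal) against
# a per-word neural response amplitude. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_words = 2000
surprisal = rng.gamma(shape=2.0, scale=1.5, size=n_words)  # e.g. from a language model, in bits
word_freq = rng.normal(size=n_words)                       # nuisance covariate

# Synthetic per-word EEG amplitude in a fixed window, generated so that
# surprisal has a real effect we can recover.
eeg_amp = -0.8 * surprisal + 0.3 * word_freq + rng.normal(scale=2.0, size=n_words)

X = np.column_stack([surprisal, word_freq])
model = LinearRegression().fit(X, eeg_amp)
print("estimated surprisal effect:", round(model.coef_[0], 2))  # should be near -0.8
```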
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he’s learned by studying deep linear networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.
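To illustrate the kind of dynamics he studies, here is a minimal sketch of a two-layer deep linear network trained by gradient descent from small random weights: the network's input-output map picks up the target's singular modes roughly one at a time, strongest first. The dimensions, target map, and learning rate are arbitrary choices for illustration, not Andrew's code.

```python
# Minimal sketch of deep linear network learning dynamics: train y = W2 @ W1 @ x
# with gradient descent from small random weights and watch the learned map
# acquire the target's singular modes in order of strength.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 8, 8

# Target linear map with well-separated singular values.
U, _ = np.linalg.qr(rng.standard_normal((n_out, n_out)))
V, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))
S = np.diag([5.0, 3.0, 1.5, 0.5, 0.2, 0.1, 0.05, 0.01])
target = U @ S @ V.T

# Small random initialization -- key to the stage-like dynamics.
W1 = 1e-3 * rng.standard_normal((n_hidden, n_in))
W2 = 1e-3 * rng.standard_normal((n_out, n_hidden))

lr = 0.02
for step in range(4001):
    # Gradient descent on 0.5 * ||target - W2 @ W1||^2 (whitened inputs).
    err = target - W2 @ W1
    W1 += lr * (W2.T @ err)
    W2 += lr * (err @ W1.T)
    if step % 1000 == 0:
        learned = np.linalg.svd(W2 @ W1, compute_uv=False)
        print(step, np.round(learned[:4], 2))  # large modes are learned before small ones
```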
Jess and I discuss construction with graph neural networks. She builds AI agents that assemble structures to solve tasks in a simulated blocks-and-glue world, using graph neural networks and deep reinforcement learning. We also discuss her work modeling mental simulation in humans and how it could be implemented in machines, and plenty more.
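As a rough sketch of the representational idea (not Jess's agent), here is one round of message passing over a graph of blocks, ending in a per-edge score that a policy could use to pick a placement or glue action. All features, sizes, and the scoring head are illustrative assumptions.

```python
# Minimal sketch: encode a scene of blocks as a graph (blocks = nodes, candidate
# relations = edges), run one round of message passing, and read out a score per
# edge that a policy could act on.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_blocks, node_dim, msg_dim = 5, 4, 16
# Node features: e.g. (x, y, width, height) of each block.
nodes = torch.randn(n_blocks, node_dim)
# Directed edges between blocks that could be stacked or glued: (sender, receiver).
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])

edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, msg_dim), nn.ReLU())
node_mlp = nn.Sequential(nn.Linear(node_dim + msg_dim, node_dim), nn.ReLU())
edge_scorer = nn.Linear(2 * node_dim, 1)

# One message-passing step: compute a message per edge from the sender/receiver
# pair, sum incoming messages at each node, then update the node embeddings.
senders, receivers = edges[:, 0], edges[:, 1]
messages = edge_mlp(torch.cat([nodes[senders], nodes[receivers]], dim=-1))
agg = torch.zeros(n_blocks, msg_dim).index_add_(0, receivers, messages)
nodes = node_mlp(torch.cat([nodes, agg], dim=-1))

# Per-edge scores (e.g. logits over candidate placement/glue actions).
scores = edge_scorer(torch.cat([nodes[senders], nodes[receivers]], dim=-1)).squeeze(-1)
print("edge action logits:", scores.detach().numpy().round(2))
```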