Jon and I discuss how our brains process the syntax and semantics of language. He combines linguistic knowledge at the level of sentences and words, neurocomputational models, and neural data like EEG and fMRI to figure out how we process and understand language while listening to the natural language of everyday conversations and stories. I also get his take on the current state of natural language processing and other AI advances, and how linguistics, neurolinguistics, and AI can contribute to each other.
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e., deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.
Jess and I discuss her work on construction using graph neural networks. She builds AI agents that construct structures to solve tasks in a simulated blocks-and-glue world, using graph neural networks and deep reinforcement learning. We also discuss her work modeling mental simulation in humans and how it could be implemented in machines, and plenty more.
Phillip and I discuss his company Brainworks, which uses the latest neuroscience to build AI into its products. We talk about their first product, Ambient Biometrics, which measures vital signs using your smartphone’s camera. We also dive into entrepreneurship in the AI startup world, ethical issues in AI, his early days using neural networks at NASA, where he thinks this is all headed, and more.