Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to studying minds and brains. Each chapter of the book focuses on one major topic, providing historical context, the major concepts that connect models to brain functions, and the current landscape of related research. We cover a handful of those topics during the episode, including the birth of AI, the differing roles of math in physics and in neuroscience, determining the neural code and how Shannon information theory plays a role, whether it’s possible to guess a brain function based on what we know about some brain structure, and “grand unified theories” of the brain. We also digress and explore topics beyond the book.
Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, its role and potential origins in theory of mind and social interaction, and how our metacognitive skills develop over our lifetimes. We also discuss what it might look like when we are able to build metacognitive AI, and whether that’s even a good idea.
Jackie and Bob discuss their research and thinking about curiosity. We also discuss how one should go about one’s career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie suggests that’s when we should start worrying about AI).
Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn’t or shouldn’t work as well as it does. Deep learning poses a challenge for mathematics, because its methods aren’t rooted in mathematical theory and are therefore a “black box” for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn’t share the current optimism in neuroscience about comparing brains to deep nets.
What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its “wild west” days. We talk about a few creativity studies they’ve performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation) and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John’s book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative processes versus creative products, and a lot more.