Galit and I discuss the independent roles of prediction and explanation in scientific models, their history and eventual separation in the philosophy of science, how they can inform each other, and how statisticians like Galit view the current deep learning explosion.
Uri and I discuss his recent perspective that conceives of brains as super-over-parameterized models that try to fit everything as exactly as possible, rather than trying to abstract the world into usable models. He was inspired by the way artificial neural networks overfit data when they can, and by how evolution works the same way on a much slower timescale.
Stefan and I discuss creativity and constraint in artificial and biological intelligence. We talk about his Asimov Institute and its goal of artificial creativity and constraint, different types and functions of creativity, the neuroscience of creativity and its relation to intelligence, how constraint is an essential factor in all creative processes, and how computational accounts of intelligence may need to be discarded to account for our unique creative abilities.
Jörn, Niko, and I continue the discussion of mental representation from last episode with Michael Rescorla. We then discuss their review paper, Peeling The Onion of Brain Representations, about different ways to extract and understand what information is represented in measured brain activity patterns.
Michael and I discuss the philosophy, and a bit of the history, of mental representation, including the computational theory of mind and the language of thought hypothesis. We also talk about how science and philosophy interact, how representation relates to computation in brains and machines, levels of computational explanation, and some examples of representational approaches to mental processes, like Bayesian modeling.