BI 148 Gaute Einevoll: Brain Simulations

Brain Inspired

Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls “measurement physics”). We also discuss Gaute’s thoughts on Carina Curto’s “beautiful vs. ugly models”, and his reaction to Noah Hutton’s In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).

BI 147 Noah Hutton: In Silico


Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the scientific, human, and social story of Henry’s massively funded projects: the Blue Brain Project and the Human Brain Project.

BI 146 Lauren Ross: Causal and Non-Causal Explanation


Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in the philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward’s interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim’s lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.

BI 145 James Woodward: Causation with a Human Face


James Woodward is a recently retired professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers on intervention: intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim argues that how we should think about causality (the normative) needs to be studied together with how we actually do think about causal relations in the world (the descriptive). We discuss many topics around this central notion, including epistemology versus metaphysics and the nature and varieties of causal structures.

BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models


Large language models, often now called “foundation models”, are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up against our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thought or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.