All Episodes

BI NMA 04: Deep Learning Basics Panel

This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school, and the first of three in the deep learning series. In this episode, the panelists discuss their experiences with some deep learning basics, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, and regularization.
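
For readers new to these topics, here is a minimal sketch in PyTorch of the kind of model the panel discusses: a small multi-layer perceptron trained with an optimizer and weight-decay regularization. The architecture, data, and hyperparameters are invented for illustration and are not from the NMA materials.

    # A small multi-layer perceptron (MLP) trained on toy data.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(10, 32),   # input layer -> hidden layer
        nn.ReLU(),
        nn.Linear(32, 1),    # hidden layer -> output
    )

    # SGD performs the optimization; weight_decay adds L2 regularization.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
    loss_fn = nn.MSELoss()

    x, y = torch.randn(64, 10), torch.randn(64, 1)   # toy inputs and targets
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()      # backpropagate gradients
        optimizer.step()     # take one optimization step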

BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness

Erik, Kevin, and I discuss… well, a lot of things.
We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence.

BI NMA 03: Stochastic Processes Panel

This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
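
As a flavor of the Bayesian piece of the discussion, here is a minimal sketch of a single Bayesian belief update for a binary hypothesis; the numbers are made up for illustration and are not from the course.

    # One application of Bayes' rule: update a prior belief given data.
    prior = 0.5                      # P(hypothesis) before seeing data
    p_data_given_h = 0.8             # likelihood of the data if h is true
    p_data_given_not_h = 0.3         # likelihood of the data if h is false

    # Total probability of the data, then the posterior via Bayes' rule.
    evidence = prior * p_data_given_h + (1 - prior) * p_data_given_not_h
    posterior = prior * p_data_given_h / evidence
    print(f"P(h | data) = {posterior:.3f}")   # ~0.727: the data favor h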

BI NMA 02: Dynamical Systems Panel

This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks.
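
For a concrete sense of the linear-systems material, a minimal simulation of a discrete-time linear dynamical system might look like the sketch below (an illustration in NumPy; the matrix is invented, not course code).

    # Simulate x[t+1] = A @ x[t]; the eigenvalues of A determine whether
    # activity decays, grows, or oscillates.
    import numpy as np

    A = np.array([[0.9, -0.2],
                  [0.2,  0.9]])      # a slowly decaying rotation
    x = np.array([1.0, 0.0])

    trajectory = [x]
    for _ in range(50):
        x = A @ x
        trajectory.append(x)

    print(np.abs(np.linalg.eigvals(A)))   # magnitudes < 1 => stable decay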

BI NMA 01: Machine Learning Panel

This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
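
For a concrete flavor of these topics, here is a minimal sketch combining dimensionality reduction with GLM-style model fitting; it assumes scikit-learn, uses synthetic data, and is my illustration rather than course code.

    # Reduce dimensionality with PCA, then fit a logistic regression (a GLM).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                 # 200 trials, 50 "neurons"
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # synthetic labels

    X_low = PCA(n_components=5).fit_transform(X)   # dimensionality reduction
    clf = LogisticRegression().fit(X_low, y)       # model fitting
    print(clf.score(X_low, y))                     # training accuracy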

BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation

Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects – like a brain area or a deep learning model – to the shared class of phenomena performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt’s conception of scientific understanding and its relation to explanation (they’re different!), and plenty more.

BI 109 Mark Bickhard: Interactivism

Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark’s account of representations and how what we represent in our minds is related to the external world – a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world that can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn’t). And representations are functional, in that they serve to maintain the organism’s far-from-equilibrium thermodynamics and thus its self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern “encoding” version of representation. We also compare Interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.

BI 108 Grace Lindsay: Models of the Mind

Grace and I discuss her new book Models of the Mind, about the blossoming of the computational approach to studying minds and brains and its conceptual foundations. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it’s possible to guess a brain function based on what we know about some brain structure, and “grand unified theories” of the brain. We also digress and explore topics beyond the book.

BI 107 Steve Fleming: Know Thyself

Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, its role and potential origins in theory of mind and social interaction, and how our metacognitive skills develop over our lifetimes. We also discuss what it might look like when we are able to build metacognitive AI, and whether that’s even a good idea.

BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity

Jackie and Bob discuss their research and thinking about curiosity. We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes; Jackie worries that when we do, that will be the time to worry about AI).

BI 105 Sanjeev Arora: Off the Convex Path

Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn’t or shouldn’t work as well as it does. Deep learning poses a challenge for mathematics because its methods aren’t rooted in mathematical theory and are therefore a “black box” for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn’t share the current neuroscience optimism comparing brains to deep nets.
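
As a rough illustration of the learning-rate idea (this sketch is my own, not Sanjeev’s actual experimental setup, and every hyperparameter here is invented):

    # Sharply increase the learning rate partway through training,
    # contrary to the usual practice of decaying it.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Linear(10, 1)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x, y = torch.randn(256, 10), torch.randn(256, 1)

    for step in range(200):
        if step == 100:                    # midway through training...
            for group in opt.param_groups:
                group["lr"] *= 100         # ...massively raise the rate
        opt.zero_grad()
        F.mse_loss(model(x), y).backward()
        opt.step()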

BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight

What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its “wild west” days. We talk about a few creativity studies they’ve performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John’s book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative processes versus creative products, and a lot more.

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading

Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The “scan and copy” approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The “gradual replacement” approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material while retaining a functioning mind.
Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about the steps we need to take to enable further advancements.

BI 102 Mark Humphries: What Is It Like To Be A Spike?

Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone’s life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we’re prediction machines!). A fun read and discussion.

BI 101 Steve Potter: Motivating Brains In and Out of Dishes

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.

BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?

We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you’ve enjoyed the collections as well. If you’re wondering where the missing 5th part is, I reserved it for Brain Inspired’s magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests:
Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?

BI 100.4 Special: What Ideas Are Holding Us Back?

In the 4th installment of our 100th episode celebration, previous guests responded to the question:
What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why?
As per usual, the responses are varied and wonderful!

BI 100.3 Special: Can We Scale Up to AGI with Current Tech?

Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3), do you think the current trend of scaling compute can lead to human-level AGI? If not, what’s missing?
It likely won’t surprise you that the vast majority answer “No.” It also likely won’t surprise you that there are differing opinions on what’s missing.