<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="https://braininspired.co/wp-content/plugins/seriously-simple-podcasting/templates/feed-stylesheet.xsl"?><rss version="2.0"
	 xmlns:content="http://purl.org/rss/1.0/modules/content/"
	 xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	 xmlns:dc="http://purl.org/dc/elements/1.1/"
	 xmlns:atom="http://www.w3.org/2005/Atom"
	 xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	 xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
	 xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"
	 xmlns:podcast="https://podcastindex.org/namespace/1.0"
	>
		<channel>
		<title>Brain Inspired</title>
		<atom:link href="https://braininspired.co/feed/podcast/brain-inspired/" rel="self" type="application/rss+xml"/>
		<link>https://braininspired.co/series/brain-inspired/</link>
		<description>Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.</description>
		<lastBuildDate>Sun, 29 Mar 2026 16:31:03 +0000</lastBuildDate>
		<language>en-US</language>
		<copyright>© 2019 Brain-Inspired</copyright>
		<itunes:subtitle>Where Neuroscience and AI Converge</itunes:subtitle>
		<itunes:author>Paul Middlebrooks</itunes:author>
		<itunes:type>episodic</itunes:type>
		<itunes:summary>Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.</itunes:summary>
		<itunes:owner>
			<itunes:name>Paul Middlebrooks</itunes:name>
			<itunes:email>paul@braininspired.co</itunes:email>
		</itunes:owner>
		<itunes:explicit>false</itunes:explicit>
		<itunes:image href="https://braininspired.co/wp-content/uploads/2020/06/currentLogoLarge.png"></itunes:image>
			<image>
				<url>https://braininspired.co/wp-content/uploads/2020/06/currentLogoLarge.png</url>
				<title>Brain Inspired</title>
				<link>https://braininspired.co/series/brain-inspired/</link>
			</image>
		<itunes:category text="Science">
			<itunes:category text="Natural Sciences"></itunes:category>
		</itunes:category>
		<itunes:category text="Technology">
							</itunes:category>
		<itunes:category text="Education">
							</itunes:category>
		<googleplay:author><![CDATA[Paul Middlebrooks]]></googleplay:author>
			<googleplay:email>paul@braininspired.co</googleplay:email>
			<googleplay:description>Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.</googleplay:description>
			<googleplay:explicit>No</googleplay:explicit>
			<googleplay:image href="https://braininspired.co/wp-content/uploads/2020/06/currentLogoLarge.png"></googleplay:image>
			<podcast:locked owner="paul@braininspired.co">yes</podcast:locked>
		<podcast:funding url="https://www.patreon.com/braininspired">Patreon for Full Episodes, Discord Community, and more</podcast:funding>
		<podcast:guid>35b5a40f-1dbd-5466-9c02-15fc0a170480</podcast:guid>
		
		<!-- podcast_generator="SSP by Castos/3.14.4" Seriously Simple Podcasting plugin for WordPress (https://wordpress.org/plugins/seriously-simple-podcasting/) -->
		<generator>https://wordpress.org/?v=6.9.4</generator>

<item>
	<title>BI 234 Juan Gallego: The Neural Manifold Manifesto</title>
	<link>https://braininspired.co/podcast/234/</link>
	<pubDate>Wed, 25 Mar 2026 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">1fd40477-8dcb-5d0b-90ed-bc16bd6e0994</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story: Neural manifolds: Latest buzzword or pathway to understand the brain?</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Juan Gallego runs the <a href="https://www.fchampalimaud.org/research/groups/gallego-lab">Neocybernetics Lab</a> at the <a href="https://www.fchampalimaud.org/champalimaud-research">Champalimaud Centre for the Unknown</a> in Lisbon, Portugal, affiliated with the neuroscience of disease and neuroscience programs, and the centre for restorative neurotechnology.</p>



<p>Juan has worked a lot on neural manifolds - the mathematical objects neuroscience is using more and more to describe how big populations of neurons coordinate their activity to do useful things. In fact, he recently gave a short talk that he titled The Manifold Manifesto, because he was asked to be provocative. And he was provocative, suggesting that manifolds are real - as real as chairs and tables are, that they have causal power, and that they might be a target of evolution. Of course he talked about his own and others' work to support those claims. So today we discuss many of those themes, through the lens of his own and others' work, and we talk about what keeps him up at night about the possible limits of using manifolds to connect brain activity with behavior and mental phenomena.</p>



<p>He's not just a manifold person, though. Juan is more broadly interested in motor control and how brains do it.</p>



<p>We also discuss his work in patients with spinal cord injuries, who don't have enough nerve connections to their muscles to actually move, but have enough nerve connections that some signal gets through. Juan and his colleagues can detect that little bit getting through, and use it to infer what behaviors the patients intend to do, and they can use that information to control actions in a computer simulation. The hope is that this will translate to controlling prosthetics to give spinal cord injury patients their mobility again.</p>



<ul class="wp-block-list">
<li><a href="https://www.fchampalimaud.org/research/groups/gallego-lab">Neocybernetics Lab</a>.</li>



<li><a href="https://bsky.app/profile/juangallego.bsky.social">@juangallego.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-025-02031-z">A neural manifold view of the brain</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-024-54738-5">A neural implementation model of feedback-based motor learning</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(24)00922-X?uuid=uuid%3Ab00b7eef-9b3b-4791-85f9-6fd6ce0cabc7">Conjoint specification of action by neocortex and striatum</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438824000059">Integrating across behaviors and timescales to understand the neural control of movement</a>.</li>



<li><a href="https://www.biorxiv.org/content/biorxiv/early/2026/03/06/2026.03.06.709637.full.pdf">Evolutionarily conserved neural dynamics across mice, monkeys, and humans</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/03/BI-234-transcript-juan-gallego.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:37 - Manifolds
14:30 - Strengths and weaknesses
24:32 - Conserved manifolds across animals and species
34:31 - Causality and manifolds
47:29 - Constraints and causes
51:05 - What to measure
58:55 - Complexity and manifolds
1:10:29 - Juan's background
1:14:08 - Prosthetics for spinal cord injuries
1:41:06 - Integrating across behaviors and timescales
1:46:56 - Conjoint specification of action by neocortex and striatum.</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story: Neural manifolds: Latest buzzword or pathway to understand the brain?</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Juan Gallego runs the <a href="https://www.fchampalimaud.org/research/groups/gallego-lab">Neocybernetics Lab</a> at the <a href="https://www.fchampalimaud.org/champalimaud-research">Champalimaud Centre for the Unknown</a> in Lisbon, Portugal, affiliated with the neuroscience of disease and neuroscience programs, and the centre for restorative neurotechnology.</p>



<p>Juan has worked a lot on neural manifolds - the mathematical objects neuroscience is using more and more to describe how big populations of neurons coordinate their activity to do useful things. In fact, he recently gave a short talk that he titled The Manifold Manifesto, because he was asked to be provocative. And he was provocative, suggesting that manifolds are real - as real as chairs and tables are, that they have causal power, and that they might be a target of evolution. Of course he talked about his own and others' work to support those claims. So today we discuss many of those themes, through the lens of his own and others' work, and we talk about what keeps him up at night about the possible limits of using manifolds to connect brain activity with behavior and mental phenomena.</p>



<p>He's not just a manifold person, though. Juan is more broadly interested in motor control and how brains do it.</p>



<p>We also discuss his work in patients with spinal cord injuries, who don't have enough nerve connections to their muscles to actually move, but have enough nerve connections that some signal gets through. Juan and his colleagues can detect that little bit getting through, and use it to infer what behaviors the patients intend to do, and they can use that information to control actions in a computer simulation. The hope is that this will translate to controlling prosthetics to give spinal cord injury patients their mobility again.</p>



<ul class="wp-block-list">
<li><a href="https://www.fchampalimaud.org/research/groups/gallego-lab">Neocybernetics Lab</a>.</li>



<li><a href="https://bsky.app/profile/juangallego.bsky.social">@juangallego.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-025-02031-z">A neural manifold view of the brain</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-024-54738-5">A neural implementation model of feedback-based motor learning</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(24)00922-X?uuid=uuid%3Ab00b7eef-9b3b-4791-85f9-6fd6ce0cabc7">Conjoint specification of action by neocortex and striatum</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438824000059">Integrating across behaviors and timescales to understand the neural control of movement</a>.</li>



<li><a href="https://www.biorxiv.org/content/biorxiv/early/2026/03/06/2026.03.06.709637.full.pdf">Evolutionarily conserved neural dynamics across mice, monkeys, and humans</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/03/BI-234-transcript-juan-gallego.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:37 - Manifolds
14:30 - Strengths and weaknesses
24:32 - Conserved manifolds across animals and species
34:31 - Causality and manifolds
47:29 - Constraints and causes
51:05 - What to measure
58:55 - Complexity and manifolds
1:10:29 - Juan's background
1:14:08 - Prosthetics for spinal cord injuries
1:41:06 - Integrating across behaviors and timescales
1:46:56 - Conjoint specification of action by neocortex and striatum.</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2576/234.mp3" length="117960832" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this story: Neural manifolds: Latest buzzword or pathway to understand the brain?



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Juan Gallego runs the Neocybernetics Lab at the Champalimaud Centre for the Unknown in Lisbon, Portugal, affiliated with the neuroscience of disease and neuroscience programs, and the centre for restorative neurotechnology.



Juan has worked a lot on neural manifolds - the mathematical objects neuroscience is using more and more to describe how big populations of neurons coordinate their activity to do useful things. In fact, he recently gave a short talk that he titled The Manifold Manifesto, because he was asked to be provocative. And he was provocative, suggesting that manifolds are real - as real as chairs and tables are, that they have causal power, and that they might be a target of evolution. Of course he talked about his own and others' work to support those claims. So today we discuss many of those themes, through the lens of his own and others' work, and we talk about what keeps him up at night about the possible limits of using manifolds to connect brain activity with behavior and mental phenomena.



He's not just a manifold person, though. Juan is more broadly interested in motor control and how brains do it.



We also discuss his work in patients with spinal cord injuries, who don't have enough nerve connections to their muscles to actually move, but have enough nerve connections that some signal gets through. Juan and his colleagues can detect that little bit getting through, and use it to infer what behaviors the patients intend to do, and they can use that information to control actions in a computer simulation. The hope is that this will translate to controlling prosthetics to give spinal cord injury patients their mobility again.




Neocybernetics Lab.



@juangallego.bsky.social



Related papers

A neural manifold view of the brain.



A neural implementation model of feedback-based motor learning.



Conjoint specification of action by neocortex and striatum.



Integrating across behaviors and timescales to understand the neural control of movement.



Evolutionarily conserved neural dynamics across mice, monkeys, and humans.






Read the transcript.



0:00 - Intro
4:37 - Manifolds
14:30 - Strengths and weaknesses
24:32 - Conserved manifolds across animals and species
34:31 - Causality and manifolds
47:29 - Constraints and causes
51:05 - What to measure
58:55 - Complexity and manifolds
1:10:29 - Juan's background
1:14:08 - Prosthetics for spinal cord injuries
1:41:06 - Integrating across behaviors and timescales
1:46:56 - Conjoint specification of action by neocortex and striatum.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/03/thumb-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/03/thumb-1.jpg</url>
		<title>BI 234 Juan Gallego: The Neural Manifold Manifesto</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>02:01:31</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>
	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this story: Neural manifolds: Latest buzzword or pathway to understand the brain?



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Juan Gallego runs the Neocybernetics Lab at the Champalimaud Centre for the Unknown in Lisbon, Portugal, affiliated with the neuroscience of disease and neuroscience programs, and the centre for restorative neurotechnology.



Juan has worked a lot on neural manifolds - the mathematical objects n]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/03/thumb-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 233 Tom Griffiths: The Laws of Thought</title>
	<link>https://braininspired.co/podcast/233/</link>
	<pubDate>Wed, 11 Mar 2026 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">5361ebbd-8745-521a-9aec-832bc4b715c5</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>Tom Griffiths directs both the <a href="https://cocosci.princeton.edu/">Computational Cognitive Science Lab</a> and the <a href="https://ai.princeton.edu/ai-lab">Princeton Laboratory for Artificial Intelligence</a> at Princeton University. He's been on Brain Inspired before to talk about his previous book <a href="https://amzn.to/4s1sRnR">Algorithms to Live By: The Computer Science of Human Decisions</a>, which he co-wrote with Brian Christian. Today he's here to talk about his new book, <a href="https://amzn.to/3O9Dx5j">The Laws of Thought: The Quest for a Mathematical Theory of the Mind</a>. In this book, Tom explains how the three pillars of logic, neural networks, and probability theory complement each other to explain cognition, arguing we are on the doorstep of settling what mathematical principles - the so-called "laws of thought" - underlie our cognition. So we discuss a little bit about a lot of things, including the concepts themselves and the people who have generated and worked on those concepts. I should also mention that Tom recorded a bunch of his interviews with people he writes about, and he's edited and polished those into a podcast called The Cognition Project, which I enjoyed after reading the book, and which I think you'd enjoy either before or after reading it.</p>



<ul class="wp-block-list">
<li><a href="https://cocosci.princeton.edu/">Computational Cognitive Science Lab</a></li>



<li><a href="https://ai.princeton.edu/ai-lab">Princeton Laboratory for Artificial Intelligence</a></li>



<li>Social: <a href="https://x.com/cocosci_lab">@cocosci_lab</a>; <a href="https://bsky.app/profile/did:plc:gopsyxl7h53zecg7o3h5wbot">@cocoscilab.bsky.social</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3O9Dx5j">The Laws of Thought: The Quest for a Mathematical Theory of the Mind</a>.</li>
</ul>
</li>



<li><a href="https://open.spotify.com/show/7rhwBGhEQCtO9cBguazFsq">Podcast: The Cognition Project</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/03/BI-233-transcript-tom-griffiths-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:20 - Tom's approach
7:19 - 3 pillars of the laws of thought
28:24 - Logic and formal systems strip away meaning
39:04 - Nature of thought
50:35 - Kahneman and Tversky
1:05:12 - Enabling constraints and inductive bias
1:12:51 - Hidden layers, probability, and hidden Markov models
1:20:47 - Conscious vs nonconscious
1:23:43 - Feelings
1:31:26 - Personal</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>Tom Griffiths directs both the <a href="https://cocosci.princeton.edu/">Computational Cognitive Science Lab</a> and the <a href="https://ai.princeton.edu/ai-lab">Princeton Laboratory for Artificial Intelligence</a> at Princeton University. He's been on Brain Inspired before to talk about his previous book <a href="https://amzn.to/4s1sRnR">Algorithms to Live By: The Computer Science of Human Decisions</a>, which he co-wrote with Brian Christian. Today he's here to talk about his new book, <a href="https://amzn.to/3O9Dx5j">The Laws of Thought: The Quest for a Mathematical Theory of the Mind</a>. In this book, Tom explains how the three pillars of logic, neural networks, and probability theory complement each other to explain cognition, arguing we are on the doorstep of settling what mathematical principles - the so-called "laws of thought" - underlie our cognition. So we discuss a little bit about a lot of things, including the concepts themselves and the people who have generated and worked on those concepts. I should also mention that Tom recorded a bunch of his interviews with people he writes about, and he's edited and polished those into a podcast called The Cognition Project, which I enjoyed after reading the book, and which I think you'd enjoy either before or after reading it.</p>



<ul class="wp-block-list">
<li><a href="https://cocosci.princeton.edu/">Computational Cognitive Science Lab</a></li>



<li><a href="https://ai.princeton.edu/ai-lab">Princeton Laboratory for Artificial Intelligence</a></li>



<li>Social: <a href="https://x.com/cocosci_lab">@cocosci_lab</a>; <a href="https://bsky.app/profile/did:plc:gopsyxl7h53zecg7o3h5wbot">@cocoscilab.bsky.social</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3O9Dx5j">The Laws of Thought: The Quest for a Mathematical Theory of the Mind</a>.</li>
</ul>
</li>



<li><a href="https://open.spotify.com/show/7rhwBGhEQCtO9cBguazFsq">Podcast: The Cognition Project</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/03/BI-233-transcript-tom-griffiths-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:20 - Tom's approach
7:19 - 3 pillars of the laws of thought
28:24 - Logic and formal systems strip away meaning
39:04 - Nature of thought
50:35 - Kahneman and Tversky
1:05:12 - Enabling constraints and inductive bias
1:12:51 - Hidden layers, probability, and hidden Markov models
1:20:47 - Conscious vs nonconscious
1:23:43 - Feelings
1:31:26 - Personal</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2573/233.mp3" length="97381433" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





Tom Griffiths directs both the Computational Cognitive Science Lab and the Princeton Laboratory for Artificial Intelligence at Princeton University. He's been on Brain Inspired before to talk about his previous book Algorithms to Live By: The Computer Science of Human Decisions, which he co-wrote with Brian Christian. Today he's here to talk about his new book, The Laws of Thought: The Quest for a Mathematical Theory of the Mind. In this book, Tom explains how the three pillars of logic, neural networks, and probability theory complement each other to explain cognition, arguing we are on the doorstep of settling what mathematical principles - the so-called "laws of thought" - underlie our cognition. So we discuss a little bit about a lot of things, including the concepts themselves and the people who have generated and worked on those concepts. I should also mention that Tom recorded a bunch of his interviews with people he writes about, and he's edited and polished those into a podcast called The Cognition Project, which I enjoyed after reading the book, and which I think you'd enjoy either before or after reading it.




Computational Cognitive Science Lab



Princeton Laboratory for Artificial Intelligence



Social: @cocosci_lab; @cocoscilab.bsky.social



Book:

The Laws of Thought: The Quest for a Mathematical Theory of the Mind.





Podcast: The Cognition Project




Read the transcript.



0:00 - Intro
3:20 - Tom's approach
7:19 - 3 pillars of the laws of thought
28:24 - Logic and formal systems strip away meaning
39:04 - Nature of thought
50:35 - Kahneman and Tversky
1:05:12 - Enabling constraints and inductive bias
1:12:51 - Hidden layers, probability, and hidden Markov models
1:20:47 - Conscious vs nonconscious
1:23:43 - Feelings
1:31:26 - Personal]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/03/thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/03/thumb.jpg</url>
		<title>BI 233 Tom Griffiths: The Laws of Thought</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:40:13</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





Tom Griffiths directs both the Computational Cognitive Science Lab and the Princeton Laboratory for Artificial Intelligence at Princeton University. He's been on Brain Inspired before to talk about his previous book Algorithms to Live By: The Computer Science of Human Decisions, which he co-wrote with Brian Christian. Today he's here to talk about his new book, The Laws of Thought: The Q]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/03/thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 232 How Should Neuroscience Integrate with Ecological Psychology?</title>
	<link>https://braininspired.co/podcast/232/</link>
	<pubDate>Wed, 25 Feb 2026 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">56a09d6b-d7a1-54e0-9322-573eb9e71c77</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>How does brain activity explain your perceptions and your actions? That's what neuroscientists ask. How does the interaction between brain, body, and environment explain your perceptions and actions? That's what ecological psychologists ask… sometimes leaving the brain out of the equation altogether. These different approaches to perception and action come with different terms, concepts, underlying assumptions, and targets of explanations.</p>



<p>So what happens when neuroscientists are inspired by ecological psychology but don't necessarily want to take on, or are ignorant of, the fundamental principles underlying ecological psychology?</p>



<p>This happens all the time, like how AI was "inspired" by the most rudimentary understanding of how brains work and took terms from neuroscience, like neuron and neural network, as stand-ins for their models. This has in some sense redefined what people mean by neuron and neural network, how they function, and how we should think of them.</p>



<p>Modern neuroscience, with better data collecting tools, has taken a turn toward more naturalistic experimental paradigms to study how brains operate in more ecologically valid situations than what has mostly been used in the history of neuroscience - highly controlled tasks and experimental setups that arguably have very little to do with how organisms evolved to interact with the world to do cognitive things.</p>



<p>One problem with this turn is that we neuroscientists don't have ready-made theoretical tools to deal with the massive amounts of less constrained data the new approach affords. This has led some neuroscientists to seek theoretical concepts elsewhere. One place that offers such tools is ecological psychology, developed by James and Eleanor Gibson in the mid-20th century and carried forward since by many adherents. Those concepts are very specific about how and what to explain regarding perception and action.</p>



<p><a href="https://dewitlab.wordpress.com/">Matthieu de Wit</a> is an associate professor at <a href="https://www.muhlenberg.edu/">Muhlenberg College</a> in Pennsylvania, who runst the ECON Lab, as in Ecological Neuroscience. <a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a> is an associate professor at Indiana University. He's been on before to talk about his book <a href="https://amzn.to/3LbSgrI">The Ecological Brain</a>. And <a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">Vicente Raja</a> is a research fellow at University of Murcia in Spain, and he's been on before to talk about ecological psychology and neuroscience.</p>



<p>With their deep expertise in ecological psychology, they are keenly interested in how neuroscience writ large adopts various facets of ecological psychology. Do neuroscientists have it right? Do they need to have it right? Is there something being lost in translation? How should neuroscientists adopt ecological psychology for an ecological neuroscience? That's what we're discussing today.</p>



<p>More broadly, this is also a story about what it's like doing research that isn't part of the current mainstream approach, in this case doing ecological psychology under the long shadow cast by the currently dominant computational, mechanistic, neuro-centric paradigm in neuroscience.</p>



<ul class="wp-block-list">
<li>Matthieu <a href="https://dewitlab.wordpress.com/">de Wit lab</a>.
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/did:plc:adcp5ggarbjp74sxmszapmzd">@dewitmm.bsky.social</a></li>
</ul>
</li>



<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a>.
<ul class="wp-block-list">
<li><a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>



<li>Vicente Raja
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/diovicen.bsky.social">@diovicen.bsky.social</a></li>



<li><a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">MINT Lab</a>.</li>



<li><a href="https://amzn.to/3VVBxOD">Ecological psychology</a>&nbsp;</li>
</ul>
</li>



<li>Previous episodes:<ul><li><a href="https://braininspired.co/?s=favela">BI 223 Vicente Raja: Ecological Psychology Motifs in Neuroscience</a></li><li><a href="https://braininspired.co/podcast/190/">BI 190 Luis Favela: The Ecological Brain</a></li></ul>
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/213/">BI 213 Representations in Minds and Brains</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/02/BI-232-transcript-ecological-psychology.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
8:23 - How Luis, Vicente, and Matthieu know each other
11:16 - Past present and future of relation between neuroscience and ecological psychology
17:02 - Why resistance to integrating neuroscience into ecological psychology?
28:26 - What counts as ecological psychology?
33:32 - Affordances properly understood
40:33 - Ecological information
47:58 - Importance of dynamics
48:59 - What's at stake?
58:27 - Environment intervention
1:16:21 - When ecological neuroscience publishes
1:31:25 - Neuroscientists' escape hatch
1:38:04 - Is ecological psychology a theory of everything?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>How does brain activity explain your perceptions and your actions? That's what neuroscientists ask. How does the interaction between brain, body, and environment explain your perceptions and actions? That's what ecological psychologists ask… sometimes leaving the brain out of the equation altogether. These different approaches to perception and action come with different terms, concepts, underlying assumptions, and targets of explanations.</p>



<p>So what happens when neuroscientists are inspired by ecological psychology but don't necessarily want to take on, or are ignorant of, the fundamental principles underlying ecological psychology?</p>



<p>This happens all the time, like how AI was "inspired" by the most rudimentary understanding of how brains work and took terms from neuroscience, like neuron and neural network, as stand-ins for their models. This has in some sense redefined what people mean by neuron and neural network, how they function, and how we should think of them.</p>



<p>Modern neuroscience, with better data collecting tools, has taken a turn toward more naturalistic experimental paradigms to study how brains operate in more ecologically valid situations than what has mostly been used in the history of neuroscience - highly controlled tasks and experimental setups that arguably have very little to do with how organisms evolved to interact with the world to do cognitive things.</p>



<p>One problem with this turn is that we neuroscientists don't have ready-made theoretical tools to deal with the massive amounts of less constrained data the new approach affords. This has led some neuroscientists to seek theoretical concepts elsewhere. One place that offers such tools is ecological psychology, developed by James and Eleanor Gibson in the mid-20th century and carried forward since by many adherents. Those concepts are very specific about how and what to explain regarding perception and action.</p>



<p><a href="https://dewitlab.wordpress.com/">Matthieu de Wit</a> is an associate professor at <a href="https://www.muhlenberg.edu/">Muhlenberg College</a> in Pennsylvania, who runst the ECON Lab, as in Ecological Neuroscience. <a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a> is an associate professor at Indiana University. He's been on before to talk about his book <a href="https://amzn.to/3LbSgrI">The Ecological Brain</a>. And <a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">Vicente Raja</a> is a research fellow at University of Murcia in Spain, and he's been on before to talk about ecological psychology and neuroscience.</p>



<p>With their deep expertise in ecological psychology, they are keenly interested in how neuroscience writ large adopts various facets of ecological psychology. Do neuroscientists have it right? Do they need to have it right? Is there something being lost in translation? How should neuroscientists adopt ecological psychology for an ecological neuroscience? That's what we're discussing today.</p>



<p>More broadly, this is also a story about what it's like doing research that isn't part of the current mainstream approach, in this case doing ecological psychology under the long shadow cast by the currently dominant computational, mechanistic, neuro-centric paradigm in neuroscience.</p>



<ul class="wp-block-list">
<li>Matthieu <a href="https://dewitlab.wordpress.com/">de Wit lab</a>.
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/did:plc:adcp5ggarbjp74sxmszapmzd">@dewitmm.bsky.social</a></li>
</ul>
</li>



<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a>.
<ul class="wp-block-list">
<li><a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>



<li>Vicente Raja
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/diovicen.bsky.social">@diovicen.bsky.social</a></li>



<li><a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">MINT Lab</a>.</li>



<li><a href="https://amzn.to/3VVBxOD">Ecological psychology</a>&nbsp;</li>
</ul>
</li>



<li>Previous episodes:<ul><li><a href="https://braininspired.co/?s=favela">BI 223 Vicente Raja: Ecological Psychology Motifs in Neuroscience</a></li><li><a href="https://braininspired.co/podcast/190/">BI 190 Luis Favela: The Ecological Brain</a></li></ul>
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/213/">BI 213 Representations in Minds and Brains</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/02/BI-232-transcript-ecological-psychology.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
8:23 - How Luis, Vicente, and Matthieu know each other
11:16 - Past present and future of relation between neuroscience and ecological psychology
17:02 - Why resistance to integrating neuroscience into ecological psychology?
28:26 - What counts as ecological psychology?
33:32 - Affordances properly understood
40:33 - Ecological information
47:58 - Importance of dynamics
48:59 - What's at stake?
58:27 - Environment intervention
1:16:21 - When ecological neuroscience publishes
1:31:25 - Neuroscientists' escape hatch
1:38:04 - Is ecological psychology a theory of everything?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2570/232.mp3" length="109864056" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



How does brain activity explain your perceptions and your actions? That's what neuroscientists ask. How does the interaction between brain, body, and environment explain your perceptions and actions? That's what ecological psychologists ask… sometimes leaving the brain out of the equation altogether. These different approaches to perception and action come with different terms, concepts, underlying assumptions, and targets of explanations.



So what happens when neuroscientists are inspired by ecological psychology but don't necessarily want to take on, or are ignorant of, the fundamental principles underlying ecological psychology?



This happens all the time, like how AI was "inspired" by the most rudimentary understanding of how brains work and took terms from neuroscience, like neuron and neural network, as stand-ins for their models. This has in some sense redefined what people mean by neuron and neural network, how they function, and how we should think of them.



Modern neuroscience, with better data collecting tools, has taken a turn toward more naturalistic experimental paradigms to study how brains operate in more ecologically valid situations than what has mostly been used in the history of neuroscience - highly controlled tasks and experimental setups that arguably have very little to do with how organisms evolved to interact with the world to do cognitive things.



One problem with this turn is that we neuroscientists don't have ready-made theoretical tools to deal with the massive amounts of less constrained data the new approach affords. This has led some neuroscientists to seek theoretical concepts elsewhere. One place that offers such tools is ecological psychology, developed by James and Eleanor Gibson in the mid-20th century and carried forward since by many adherents. Those concepts are very specific about how and what to explain regarding perception and action.



Matthieu de Wit is an associate professor at Muhlenberg College in Pennsylvania, who runs the ECON Lab, as in Ecological Neuroscience. Luis Favela is an associate professor at Indiana University. He's been on before to talk about his book The Ecological Brain. And Vicente Raja is a research fellow at the University of Murcia in Spain, and he's been on before to talk about ecological psychology and neuroscience.



With their deep expertise in ecological psychology, they are keenly interested in how neuroscience writ large adopts various facets of ecological psychology. Do neuroscientists have it right? Do they need to have it right? Is there something being lost in translation? How should neuroscientists adopt ecological psychology for an ecological neuroscience? That's what we're discussing today.



More broadly, this is also a story about what it's like doing research that isn't part of the current mainstream approach, in this case doing ecological psychology under the long shadow cast by the currently dominant computational, mechanistic, neuro-centric paradigm in neuroscience.




Matthieu de Wit lab.

@dewitmm.bsky.social





Luis Favela.

The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment





Vicente Raja

@diovicen.bsky.social



MINT Lab.



Ecological psychology





Previous episodes:BI 223 Vicente Raja: Ecological]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/02/thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/02/thumb.jpg</url>
		<title>BI 232 How Should Neuroscience Integrate with Ecological Psychology?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:53:10</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



How does brain activity explain your perceptions and your actions? That's what neuroscientists ask. How does the interaction between brain, body, and environment explain your perceptions and actions? That's what ecological psychologists ask… sometimes leaving the brain out of the equation altogether. These different approaches to perception and action come with different terms, concepts, u]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/02/thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 231 Jaan Aru: Conscious AI? Not Even Close!</title>
	<link>https://braininspired.co/podcast/231/</link>
	<pubDate>Wed, 11 Feb 2026 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">4c204648-d18b-5bfe-88dd-2f52e6eb5d2b</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Jaan Aru is a co-principal investigator of the Natural and Artificial Intelligence Lab at the University of Tartu in Estonia, where he is an associate professor. Jaan's name has kept popping up on papers I've read over the last few years, sometimes alongside other guests I've had on the podcast, like <a href="https://braininspired.co/podcast/138/">Matthew Larkum</a> and <a href="https://braininspired.co/podcast/121/">Mac Shine</a>. With those people and others, he has co-authored papers exploring how some of the pesky biological details of brains might be important for our subjective conscious experience, details like dendritic integration and loops between the cortex and the thalamus. It turns out a recurring theme in his work is connecting lower-level, nitty-gritty biological details with higher-level cognitive functioning. And he has some thoughts about what that might mean for the prospects of consciousness in artificial systems. We also touch on his more recent interest in understanding the brain basis of insight and creativity, connecting the more mundane kinds of insights during problem solving, for example, with the more profound kinds during mystical and psychedelic experiences.</p>



<ul class="wp-block-list">
<li><a href="https://nail.cs.ut.ee/">Natural &amp; Artificial Intelligence Lab</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:pesund73gzmi4skufhb2mtye">@jaanaru.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0166223623002278">The feasibility of artificial consciousness through the lens of neuroscience</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0149763425005251">On biological and artificial consciousness: A case for biological computationalism</a></li>



<li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30175-3">Cellular mechanisms of conscious processing</a>.</li>



<li><a href="https://www.tandfonline.com/doi/abs/10.1080/09515089.2026.2613030">Realization experiences: a convergent account of insight and mystical experiences</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:21 - Jaan's approach
8:51 - Likelihood of machine consciousness
18:58 - Across-levels understanding
30:23 - Intelligence vs consciousness
36:27 - Connecting low-level implementation to cognition
45:42 - Organization and constraints
52:28 - Thalamocortical loops
1:04:18 - Artificial consciousness
1:14:34 - Theories of consciousness
1:23:16 - Creativity and insight
1:37:26 - Science research in Estonia</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Jaan Aru is a co-principal investigator of the Natural and Artificial Intelligence Lab at the University of Tartu in Estonia, where he is an associate professor. Jaan's name has kept popping up on papers I've read over the last few years, sometimes alongside other guests I've had on the podcast, like <a href="https://braininspired.co/podcast/138/">Matthew Larkum</a> and <a href="https://braininspired.co/podcast/121/">Mac Shine</a>. With those people and others, he has co-authored papers exploring how some of the pesky biological details of brains might be important for our subjective conscious experience, details like dendritic integration and loops between the cortex and the thalamus. It turns out a recurring theme in his work is connecting lower-level, nitty-gritty biological details with higher-level cognitive functioning. And he has some thoughts about what that might mean for the prospects of consciousness in artificial systems. We also touch on his more recent interest in understanding the brain basis of insight and creativity, connecting the more mundane kinds of insights during problem solving, for example, with the more profound kinds during mystical and psychedelic experiences.</p>



<ul class="wp-block-list">
<li><a href="https://nail.cs.ut.ee/">Natural &amp; Artificial Intelligence Lab</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:pesund73gzmi4skufhb2mtye">@jaanaru.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0166223623002278">The feasibility of artificial consciousness through the lens of neuroscience</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0149763425005251">On biological and artificial consciousness: A case for biological computationalism</a></li>



<li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30175-3">Cellular mechanisms of conscious processing</a>.</li>



<li><a href="https://www.tandfonline.com/doi/abs/10.1080/09515089.2026.2613030">Realization experiences: a convergent account of insight and mystical experiences</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:21 - Jaan's approach
8:51 - Likelihood of machine consciousness
18:58 - Across-levels understanding
30:23 - Intelligence vs consciousness
36:27 - Connecting low-level implementation to cognition
45:42 - Organization and constraints
52:28 - Thalamocortical loops
1:04:18 - Artificial consciousness
1:14:34 - Theories of consciousness
1:23:16 - Creativity and insight
1:37:26 - Science research in Estonia</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2568/231.mp3" length="104932933" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Jaan Aru is a co-principal investigator of the Natural and Artificial Intelligence Lab at the University of Tartu in Estonia, where he is an associate professor. Jaan's name has kept popping up on papers I've read over the last few years, sometimes alongside other guests I've had on the podcast, like Matthew Larkum and Mac Shine. With those people and others, he has co-authored papers exploring how some of the pesky biological details of brains might be important for our subjective conscious experience, details like dendritic integration and loops between the cortex and the thalamus. It turns out a recurring theme in his work is connecting lower-level, nitty-gritty biological details with higher-level cognitive functioning. And he has some thoughts about what that might mean for the prospects of consciousness in artificial systems. We also touch on his more recent interest in understanding the brain basis of insight and creativity, connecting the more mundane kinds of insights during problem solving, for example, with the more profound kinds during mystical and psychedelic experiences.




Natural &amp; Artificial Intelligence Lab



Social: @jaanaru.bsky.social



Related papers

The feasibility of artificial consciousness through the lens of neuroscience



On biological and artificial consciousness: A case for biological computationalism



Cellular mechanisms of conscious processing.



Realization experiences: a convergent account of insight and mystical experiences.






0:00 - Intro
4:21 - Jaan's approach
8:51 - Likelihood of machine consciousness
18:58 - Across-levels understanding
30:23 - Intelligence vs consciousness
36:27 - Connecting low-level implementation to cognition
45:42 - Organization and constraints
52:28 - Thalamocortical loops
1:04:18 - Artificial consciousness
1:14:34 - Theories of consciousness
1:23:16 - Creativity and insight
1:37:26 - Science research in Estonia]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/02/231-Jaan-Aru-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/02/231-Jaan-Aru-thumb.jpg</url>
		<title>BI 231 Jaan Aru: Conscious AI? Not Even Close!</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:48:03</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Jaan Aru is a co-principal investigator of the Natural and Artificial Intelligence Lab at the University of Tartu in Estonia, where he is an associate professor. Jaan's name has kept popping up on papers I've read over the last few years, sometimes alongside other guests I've had on the podcast, like Matthew Larkum and Mac Shine. With those people and others, he has co-authored papers expl]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/02/231-Jaan-Aru-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 230 Michael Shadlen: How Thoughts Become Conscious</title>
	<link>https://braininspired.co/podcast/230/</link>
	<pubDate>Wed, 28 Jan 2026 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">e2f51257-47c2-5927-9492-eb63ef940ab1</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Michael Shadlen is a professor of neuroscience in the Department of Neuroscience at Columbia University, where he's the principal investigator of the Shadlen Lab. If you study the neural basis of decision making, you already know Shadlen's extensive research, because you are constantly referring to it if you're not already in his lab doing the work. The name Shadlen adorns many, many papers relating behavior and neural activity during decision-making to mathematical models in the drift diffusion family. That's not the only work he is known for.</p>



<p>As you may have gleaned from those little intro clips, Michael is with me today to discuss his account of what makes a thought conscious, in the hope of inspiring neuroscience research to eventually tackle the hard problem of consciousness - why and how we have subjective experience.</p>



<p>But Mike's account isn't an account of just consciousness. It's an account of both non-conscious and conscious thought, and of how thoughts go from non-conscious to conscious.</p>



<p>His account is inspired by multiple sources and lines of reasoning.</p>



<p>Partly, Shadlen refers to philosophical accounts of cognition by people like Merleau-Ponty and James Gibson, appreciating the embodied and ecological aspects of cognition.</p>



<p>And much of his account derives from his own decades of research studying the neural basis of decision-making mostly using perceptual choice tasks where animals make eye movements to report their decisions.</p>



<p>So we discuss some of that, including what we continue to learn about neurobiological, neurophysiological, and anatomical details of brains, and the possibility of AI consciousness, given Shadlen's account.</p>



<ul class="wp-block-list">
<li><a href="https://shadlenlab.zi.columbia.edu/">Shadlen Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/shadlen">@shadlen</a>.</li>



<li><a href="https://braininspired.co/wp-content/uploads/2026/01/ShadlenM_Kandel-Ch56_1392-1416.pdf" target="_blank" rel="noreferrer noopener">Decision Making and Consciousness</a> (Chapter in the upcoming Principles of Neural Science textbook).</li>



<li>Talk: <a href="https://www.youtube.com/watch?v=vvvqyUf0BQc">Decision Making as a Model of thought</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-230-transcript-michael-shadlen.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:05 - Overview of Mike's account
9:10 - Thought as interrogation
21:03 - Neurons and thoughts
27:05 - Why so many neurons?
36:21 - Evolution of Mike's thinking
39:48 - Merleau-Ponty, cognition, and meaning
44:54 - Naturalistic tasks
51:11 - Consciousness
58:01 - Martin Buber and relational consciousness
1:00:18 - Social and conscious phenomena correlated
1:04:17 - Function vs. nature of consciousness
1:06:05 - Did language evolve because of consciousness?
1:11:11 - Weak phenomenology and long-range feedback
1:22:02 - How does interrogation work in the brain?
1:26:18 - AI consciousness
1:35:49 - The hard problem of consciousness
1:39:34 - Meditation and flow</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Michael Shadlen is a professor of neuroscience in the Department of Neuroscience at Columbia University, where he's the principal investigator of the Shadlen Lab. If you study the neural basis of decision making, you already know Shadlen's extensive research, because you are constantly referring to it if you're not already in his lab doing the work. The name Shadlen adorns many, many papers relating behavior and neural activity during decision-making to mathematical models in the drift diffusion family. That's not the only work he is known for.</p>



<p>As you may have gleaned from those little intro clips, Michael is with me today to discuss his account of what makes a thought conscious, in the hope of inspiring neuroscience research to eventually tackle the hard problem of consciousness - why and how we have subjective experience.</p>



<p>But Mike's account isn't an account of just consciousness. It's an account of both non-conscious and conscious thought, and of how thoughts go from non-conscious to conscious.</p>



<p>His account is inspired by multiple sources and lines of reasoning.</p>



<p>Partly, Shadlen refers to philosophical accounts of cognition by people like Merleau-Ponty and James Gibson, appreciating the embodied and ecological aspects of cognition.</p>



<p>And much of his account derives from his own decades of research studying the neural basis of decision-making mostly using perceptual choice tasks where animals make eye movements to report their decisions.</p>



<p>So we discuss some of that, including what we continue to learn about neurobiological, neurophysiological, and anatomical details of brains, and the possibility of AI consciousness, given Shadlen's account.</p>



<ul class="wp-block-list">
<li><a href="https://shadlenlab.zi.columbia.edu/">Shadlen Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/shadlen">@shadlen</a>.</li>



<li><a href="https://braininspired.co/wp-content/uploads/2026/01/ShadlenM_Kandel-Ch56_1392-1416.pdf" target="_blank" rel="noreferrer noopener">Decision Making and Consciousness</a> (Chapter in the upcoming Principles of Neural Science textbook).</li>



<li>Talk: <a href="https://www.youtube.com/watch?v=vvvqyUf0BQc">Decision Making as a Model of thought</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-230-transcript-michael-shadlen.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:05 - Overview of Mike's account
9:10 - Thought as interrogation
21:03 - Neurons and thoughts
27:05 - Why so many neurons?
36:21 - Evolution of Mike's thinking
39:48 - Merleau-Ponty, cognition, and meaning
44:54 - Naturalistic tasks
51:11 - Consciousness
58:01 - Martin Buber and relational consciousness
1:00:18 - Social and conscious phenomena correlated
1:04:17 - Function vs. nature of consciousness
1:06:05 - Did language evolve because of consciousness?
1:11:11 - Weak phenomenology and long-range feedback
1:22:02 - How does interrogation work in the brain?
1:26:18 - AI consciousness
1:35:49 - The hard problem of consciousness
1:39:34 - Meditation and flow</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2562/230.mp3" length="105636564" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Michael Shadlen is a professor of neuroscience in the Department of Neuroscience at Columbia University, where he's the principal investigator of the Shadlen Lab. If you study the neural basis of decision making, you already know Shadlen's extensive research, because you are constantly referring to it if you're not already in his lab doing the work. The name Shadlen adorns many, many papers relating behavior and neural activity during decision-making to mathematical models in the drift diffusion family. That's not the only work he is known for.



As you may have gleaned from those little intro clips, Michael is with me today to discuss his account of what makes a thought conscious, in the hope of inspiring neuroscience research to eventually tackle the hard problem of consciousness - why and how we have subjective experience.



But Mike's account isn't an account of just consciousness. It's an account of both non-conscious and conscious thought, and of how thoughts go from non-conscious to conscious.



His account is inspired by multiple sources and lines of reasoning.



Partly, Shadlen refers to philosophical accounts of cognition by people like Merleau-Ponty and James Gibson, appreciating the embodied and ecological aspects of cognition.



And much of his account derives from his own decades of research studying the neural basis of decision-making mostly using perceptual choice tasks where animals make eye movements to report their decisions.



So we discuss some of that, including what we continue to learn about neurobiological, neurophysiological, and anatomical details of brains, and the possibility of AI consciousness, given Shadlen's account.




Shadlen Lab.



Twitter:&nbsp;@shadlen.



Decision Making and Consciousness (Chapter in the upcoming Principles of Neural Science textbook).



Talk: Decision Making as a Model of thought




Read the transcript.



0:00 - Intro
7:05 - Overview of Mike's account
9:10 - Thought as interrogation
21:03 - Neurons and thoughts
27:05 - Why so many neurons?
36:21 - Evolution of Mike's thinking
39:48 - Merleau-Ponty, cognition, and meaning
44:54 - Naturalistic tasks
51:11 - Consciousness
58:01 - Martin Buber and relational consciousness
1:00:18 - Social and conscious phenomena correlated
1:04:17 - Function vs. nature of consciousness
1:06:05 - Did language evolve because of consciousness?
1:11:11 - Weak phenomenology and long-range feedback
1:22:02 - How does interrogation work in the brain?
1:26:18 - AI consciousness
1:35:49 - The hard problem of consciousness
1:39:34 - Meditation and flow]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/01/thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/01/thumb.jpg</url>
		<title>BI 230 Michael Shadlen: How Thoughts Become Conscious</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:48:30</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Michael Shadlen is a professor of neuroscience in the Department of Neuroscience at Columbia University, where he's the principal investigator of the Shadlen Lab. If you study the neural basis of decision making, you already know Shadlen's extensive research, because you are constantly referring to it if you're not already in his lab doing the work. The name Shadlen adorns many, many papers]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/01/thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 229 Tomaso Poggio: Principles of Intelligence and Learning</title>
	<link>https://braininspired.co/podcast/229/</link>
	<pubDate>Wed, 14 Jan 2026 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">014c9575-2cbb-521f-b084-3de5922e248d</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.</p>



<p>Tomaso believes we are in between building and understanding useful AI. That is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery but before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis, which frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added "learning" as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.</p>



<p>Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can (computationally) efficiently learn useful representations to solve tasks.</p>



<ul class="wp-block-list">
<li><a href="https://poggio-lab.mit.edu/lab/">Lab website</a>.</li>



<li><a href="https://dspace.mit.edu/bitstream/handle/1721.1/70970/mit-csail-tr-2012-014.pdf">Tomaso's Autobiography</a>&nbsp;</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2507.02550">Position: A Theory of Deep Learning Must Include Compositional Sparsity</a></li>



<li><a href="https://dspace.mit.edu/bitstream/handle/1721.1/70970/mit-csail-tr-2012-014.pdf">The Levels of Understanding framework, revised</a></li>
</ul>
</li>



<li>Blog post:
<ul class="wp-block-list">
<li><a href="https://poggio-lab.mit.edu/blog/">Poggio lab blog</a>.</li>



<li><a href="https://poggio-lab.mit.edu/the-missing-foundations-of-intelligence/">The Missing Foundations of Intelligence</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-229-transcript-tomaso-Poggio.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
9:04 - Learning as the fourth level of Marr's levels
12:34 - Engineering then theory (Volta to Maxwell)
19:23 - Does AI need theory?
26:29 - Learning as the door to intelligence
38:30 - Learning in the brain vs backpropagation
40:45 - Compositional sparsity
49:57 - Math vs computer science
56:50 - Generalizability
1:04:41 - Sparse compositionality in brains?
1:07:33 - Theory vs experiment
1:09:46 - Who needs deep learning theory?
1:19:51 - Does theory really help? Patreon
1:28:54 - Outlook</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.</p>



<p>Tomaso believes we are in between building and understanding useful AI. That is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery but before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis, which frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added "learning" as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.</p>



<p>Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can (computationally) efficiently learn useful representations to solve tasks.</p>



<ul class="wp-block-list">
<li><a href="https://poggio-lab.mit.edu/lab/">Lab website</a>.</li>



<li><a href="https://dspace.mit.edu/bitstream/handle/1721.1/70970/mit-csail-tr-2012-014.pdf">Tomaso's Autobiography</a>&nbsp;</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2507.02550">Position: A Theory of Deep Learning Must Include Compositional Sparsity</a></li>



<li><a href="https://dspace.mit.edu/bitstream/handle/1721.1/70970/mit-csail-tr-2012-014.pdf">The Levels of Understanding framework, revised</a></li>
</ul>
</li>



<li>Blog post:
<ul class="wp-block-list">
<li><a href="https://poggio-lab.mit.edu/blog/">Poggio lab blog</a>.</li>



<li><a href="https://poggio-lab.mit.edu/the-missing-foundations-of-intelligence/">The Missing Foundations of Intelligence</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-229-transcript-tomaso-Poggio.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
9:04 - Learning as the fourth level of Marr's levels
12:34 - Engineering then theory (Volta to Maxwell)
19:23 - Does AI need theory?
26:29 - Learning as the door to intelligence
38:30 - Learning in the brain vs backpropagation
40:45 - Compositional sparsity
49:57 - Math vs computer science
56:50 - Generalizability
1:04:41 - Sparse compositionality in brains?
1:07:33 - Theory vs experiment
1:09:46 - Who needs deep learning theory?
1:19:51 - Does theory really help? Patreon
1:28:54 - Outlook</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2560/229.mp3" length="98338122" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.



Tomaso believes we are in between building and understanding useful AI. That is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery but before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis, which frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added "learning" as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.



Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can (computationally) efficiently learn useful representations to solve tasks.




Lab website.



Tomaso's Autobiography&nbsp;



Related papers

Position: A Theory of Deep Learning Must Include Compositional Sparsity



The Levels of Understanding framework, revised





Blog post:

Poggio lab blog.



The Missing Foundations of Intelligence






Read the transcript.



0:00 - Intro
9:04 - Learning as the fourth level of Marr's levels
12:34 - Engineering then theory (Volta to Maxwell)
19:23 - Does AI need theory?
26:29 - Learning as the door to intelligence
38:30 - Learning in the brain vs backpropagation
40:45 - Compositional sparsity
49:57 - Math vs computer science
56:50 - Generalizability
1:04:41 - Sparse compositionality in brains?
1:07:33 - Theory vs experiment
1:09:46 - Who needs deep learning theory?
1:19:51 - Does theory really help? Patreon
1:28:54 - Outlook]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2026/01/web-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2026/01/web-thumb.jpg</url>
		<title>BI 229 Tomaso Poggio: Principles of Intelligence and Learning</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:41:00</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.



Tomaso believes]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2026/01/web-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 228 Alex Maier: Laws of Consciousness</title>
	<link>https://braininspired.co/podcast/228/</link>
	<pubDate>Wed, 31 Dec 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">5fd8b445-e2db-565e-86e6-eb6b55f702a4</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns and related topics. Today, he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, as you'll hear.</p>



<p>I've known Alex since my own time at Vanderbilt, where I was a postdoc and he was a new faculty member, and I remember being impressed with him then. I was at a talk he gave - a job talk or an early talk - where it was immediately obvious how passionate and articulate he is about what he does, and he even showed off some of his telescope photography - good pictures of the moon, as I recall. Anyway, we always had fun interactions, even if sometimes it was a quick hello as he ran up stairs and down hallways to get wherever he was going, always in a hurry.</p>



<p>Today we discuss why Alex sees integrated information theory (IIT) as the most viable current prospect for explaining consciousness. That is mainly because IIT has developed a formalized mathematical account that hopes to do for consciousness what mathematics has done for physics, that is, give us what we know as laws of nature. So basically our discussion revolves around everything related to that, like philosophy of science, distinguishing mathematics from "the mathematical", some of the tools he is finding valuable, like category theory, and some of his work measuring the level of consciousness that IIT attributes to a whole soccer team, not just to the individual players that comprise it.</p>



<ul class="wp-block-list">
<li><a href="https://maierlab.wiki/">Maier Lab</a></li>



<li><a href="https://www.youtube.com/@astonishinghypothesis/featured">Astonishing Hypothesis</a> (Alex's youtube channel)</li>



<li><a href="https://maierav.github.io/sensation/">Sensation and Perception</a> textbook (in-the-making)</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://osf.io/preprints/psyarxiv/cdjpf_v1">Linking the Structure of Neuronal Mechanisms to the Structure of Qualia</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0960077925016406">Information integration and the latent consciousness of human groups</a></li>



<li><a href="https://arxiv.org/abs/2504.09614">Neural mechanisms of predictive processing: a collaborative community experiment through the OpenScope program</a></li>
</ul>
</li>
</ul>



<ul class="wp-block-list">
<li>Various things Alex mentioned:
<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=JVDRB0GtHqE">“An Antiphilosophy of Mathematics,” Peter J. Freyd</a>, a YouTube video about "the mathematical".</li>



<li><a href="https://www.youtube.com/playlist?list=PLUl4u3cNGP63bAfjGas3TuA4ZCPUtN6Xf">David Kaiser's playlist on modern physics</a>.</li>
</ul>
</li>



<li>Here's a link to the <a href="https://www.iit.wiki/">Integrated Information Theory Wiki</a>.</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-228-transcript-alex-maier.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:27 - Discovering consciousness science
11:23 - Laws of perception
15:48 - Integrated information theory and mathematical formalism
23:54 - Theories of consciousness without math
28:18 - Computation metaphor
34:44 - Formalized mathematics is the way
36:56 - Category theory
41:42 - Structuralism
51:09 - The mathematical
54:33 - Metaphysics of the mathematical
59:52 - Yoneda Lemma
1:12:05 - What's real
1:26:22 - Measuring consciousness of a soccer team
1:35:03 - Assumptions and approximations of IIT
1:43:13 - Open science</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns and other related topics. Today he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, which you'll hear about.</p>



<p>I've known Alex since my own time at Vanderbilt, where I was a postdoc and he was a new faculty member, and I remember being impressed with him even then. I was at a talk he gave, a job talk or an early talk, where it was immediately obvious how passionate and articulate he is about what he does. He even showed off some of his telescope photography, including some good pictures of the moon. Anyway, we always had fun interactions, even if sometimes it was just a quick hello as he ran up stairs and down hallways to get wherever he was going, always in a hurry.</p>



<p>Today we discuss why Alex sees integrated information theory (IIT) as the most viable current prospect for explaining consciousness. That is mainly because IIT has developed a formalized mathematical account that hopes to do for consciousness what mathematics has done for physics: give us what we know as laws of nature. So our discussion revolves around everything related to that, like philosophy of science, distinguishing mathematics from "the mathematical", some of the tools he finds valuable, like category theory, and some of his work measuring the level of consciousness IIT says a whole soccer team has, not just the individuals who comprise the team.</p>



<ul class="wp-block-list">
<li><a href="https://maierlab.wiki/">Maier Lab</a></li>



<li><a href="https://www.youtube.com/@astonishinghypothesis/featured">Astonishing Hypothesis</a> (Alex's YouTube channel)</li>






<li><a href="https://maierav.github.io/sensation/">Sensation and Perception</a> textbook (in the making)</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://osf.io/preprints/psyarxiv/cdjpf_v1">Linking the Structure of Neuronal Mechanisms to the Structure of Qualia</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0960077925016406">Information integration and the latent consciousness of human groups</a></li>



<li><a href="https://arxiv.org/abs/2504.09614">Neural mechanisms of predictive processing: a collaborative community experiment through the OpenScope program</a></li>
</ul>
</li>
</ul>



<ul class="wp-block-list">
<li>Various things Alex mentioned:
<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=JVDRB0GtHqE">“An Antiphilosophy of Mathematics,” Peter J. Freyd</a>, a YouTube video about "the mathematical".</li>



<li><a href="https://www.youtube.com/playlist?list=PLUl4u3cNGP63bAfjGas3TuA4ZCPUtN6Xf">David Kaiser's playlist on modern physics</a>.</li>
</ul>
</li>



<li>Here's a link to the <a href="https://www.iit.wiki/">Integrated Information Theory Wiki</a>.</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2026/01/BI-228-transcript-alex-maier.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:27 - Discovering consciousness science
11:23 - Laws of perception
15:48 - Integrated information theory and mathematical formalism
23:54 - Theories of consciousness without math
28:18 - Computation metaphor
34:44 - Formalized mathematics is the way
36:56 - Category theory
41:42 - Structuralism
51:09 - The mathematical
54:33 - Metaphysics of the mathematical
59:52 - Yoneda Lemma
1:12:05 - What's real
1:26:22 - Measuring consciousness of a soccer team
1:35:03 - Assumptions and approximations of IIT
1:43:13 - Open science</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2557/228.mp3" length="114461139" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns and other related topics. Today he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, which you'll hear about.



I've known Alex since my own time at Vanderbilt, where I was a postdoc and he was a new faculty member, and I remember being impressed with him even then. I was at a talk he gave, a job talk or an early talk, where it was immediately obvious how passionate and articulate he is about what he does. He even showed off some of his telescope photography, including some good pictures of the moon. Anyway, we always had fun interactions, even if sometimes it was just a quick hello as he ran up stairs and down hallways to get wherever he was going, always in a hurry.



Today we discuss why Alex sees integrated information theory (IIT) as the most viable current prospect for explaining consciousness. That is mainly because IIT has developed a formalized mathematical account that hopes to do for consciousness what mathematics has done for physics: give us what we know as laws of nature. So our discussion revolves around everything related to that, like philosophy of science, distinguishing mathematics from "the mathematical", some of the tools he finds valuable, like category theory, and some of his work measuring the level of consciousness IIT says a whole soccer team has, not just the individuals who comprise the team.




Maier Lab



Astonishing Hypothesis (Alex's YouTube channel)






Sensation and Perception textbook (in the making)



Related papers

Linking the Structure of Neuronal Mechanisms to the Structure of Qualia



Information integration and the latent consciousness of human groups



Neural mechanisms of predictive processing: a collaborative community experiment through the OpenScope program







Various things Alex mentioned:

“An Antiphilosophy of Mathematics,” Peter J. Freyd, a YouTube video about "the mathematical".



David Kaiser's playlist on modern physics.





Here's a link to the Integrated Information Theory Wiki.




Read the transcript.



0:00 - Intro
4:27 - Discovering consciousness science
11:23 - Laws of perception
15:48 - Integrated information theory and mathematical formalism
23:54 - Theories of consciousness without math
28:18 - Computation metaphor
34:44 - Formalized mathematics is the way
36:56 - Category theory
41:42 - Structuralism
51:09 - The mathematical
54:33 - Metaphysics of the mathematical
59:52 - Yoneda Lemma
1:12:05 - What's real
1:26:22 - Measuring consciousness of a soccer team
1:35:03 - Assumptions and approximations of IIT
1:43:13 - Open science]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/12/thumb1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/12/thumb1.jpg</url>
		<title>BI 228 Alex Maier: Laws of Consciousness</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:57:54</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns and other related topics. Today he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back,]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/12/thumb1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 227 Decoding Memories: Aspirational Neuroscience 2025</title>
	<link>https://braininspired.co/podcast/227/</link>
	<pubDate>Wed, 17 Dec 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">8f2d2435-1e8b-55ef-a546-bc4e5ba10d30</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Can you look at all the synaptic connections of a brain, and tell me one nontrivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group.</p>



<p>I was recently invited for the second time to chair a panel of experts to discuss that question and all the issues surrounding it: how to decode a nontrivial memory from a static map of synaptic connectivity.</p>



<p>Before I play that recording, let me set the stage a bit more.</p>



<p>Aspirational Neuroscience is a community of neuroscientists run by Kenneth Hayworth, with the goal, from their website, to "balance aspirational thinking with respect to the long-term implications of a successful neuroscience with practical realism about our current state of ignorance and knowledge." One of those aspirations is to decode things - memories, learned behaviors, and so on - from static connectomes. They hold satellite events at the SfN conference and invite experts in connectomics from academia and industry to share their thoughts and progress toward that goal.</p>



<p>In this panel discussion, we touch on multiple relevant topics. One question is what the right experimental design, or designs, would be to show that we are decoding memory, and what counts as a benchmark across model organisms and theoretical frameworks. We discuss some of the obstacles in the way, both technological and conceptual, like the fact that proofreading connectome connections - manually verifying and editing them - is a giant bottleneck, or the very definition of memory: what counts as a memory, let alone a "nontrivial" memory, and so on. And the panel takes lots of questions from the audience as well.</p>



<p>I apologize that the audio is not crystal clear in this recording. I did my best to clean it up, and I take full blame for not setting up my audio recorder to capture the best sound. So, if you are a listener, I'd encourage you to check out the video version, which also has subtitles throughout for when the language isn't clear.</p>



<p>Anyway, this is a fun and smart group of people, and I look forward to another one next year, I hope.</p>



<p>The last time I did this was episode 180, BI 180, which I link to in the show notes. Before that, I had on Ken Hayworth, who, as I mentioned, runs Aspirational Neuroscience, and Randal Koene, who is on the panel this time. They were on to talk about the future possibility of uploading minds to computers based on connectomes. That was episode 103.</p>



<ul class="wp-block-list">
<li><a href="https://aspirationalneuroscience.org/">Aspirational Neuroscience</a></li>



<li>Panel
<ul class="wp-block-list">
<li><a href="https://scholar.google.com/citations?user=XSjXVbQAAAAJ&amp;hl=en">Michał Januszewski</a><ul><li><a href="https://bsky.app/profile/michalwj.bsky.social">@michalwj.bsky.social</a></li></ul>
<ul class="wp-block-list">
<li>Research scientist (connectomics) with Google Research, automated neural tracing expert</li>
</ul>
</li>



<li><a href="https://alleninstitute.org/person/sven-dorkenwald/">Sven Dorkenwald</a>
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/sdorkenw.bsky.social">@sdorkenw.bsky.social</a></li>



<li>Research fellow at the Allen Institute, first author on the <a href="https://www.nature.com/articles/s41586-024-07558-y">first full Drosophila connectome</a> paper</li>
</ul>
</li>



<li><a href="https://www.esi-frankfurt.de/people/heleneschmidt/">Helene Schmidt</a><ul><li><a href="https://bsky.app/profile/helenelab.bsky.social">@helenelab.bsky.social</a></li></ul>
<ul class="wp-block-list">
<li>Group leader at Ernst Strungmann Institute, hippocampus connectome &amp; EM expert</li>
</ul>
</li>



<li><a href="https://www.e11.bio/andrew-payne">Andrew Payne</a>
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/andrewcpayne.bsky.social">@andrewcpayne.bsky.social</a></li>



<li>Founder of <a href="https://www.e11.bio/">E11 Bio</a>, expansion microscopy &amp; viral tracing expert&nbsp;</li>
</ul>
</li>



<li><a href="https://carboncopies.org/About/Team/Bios/RandalKoene/">Randal Koene</a>
<ul class="wp-block-list">
<li>Founder of the <a href="https://carboncopies.org/">Carboncopies Foundation</a>, computational neuroscientist dedicated to the problem of brain emulation.</li>
</ul>
</li>
</ul>
</li>



<li>Related episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/103/">BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading</a></li>



<li><a href="https://braininspired.co/podcast/180/">BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding</a></li>
</ul>
</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Can you look at all the synaptic connections of a brain, and tell me one nontrivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group.</p>



<p>I was recently invited for the second time to chair a panel of experts to discuss that question and all the issues surrounding it: how to decode a nontrivial memory from a static map of synaptic connectivity.</p>



<p>Before I play that recording, let me set the stage a bit more.</p>



<p>Aspirational Neuroscience is a community of neuroscientists run by Kenneth Hayworth, with the goal, from their website, to "balance aspirational thinking with respect to the long-term implications of a successful neuroscience with practical realism about our current state of ignorance and knowledge." One of those aspirations is to decode things - memories, learned behaviors, and so on - from static connectomes. They hold satellite events at the SfN conference and invite experts in connectomics from academia and industry to share their thoughts and progress toward that goal.</p>



<p>In this panel discussion, we touch on multiple relevant topics. One question is what the right experimental design, or designs, would be to show that we are decoding memory, and what counts as a benchmark across model organisms and theoretical frameworks. We discuss some of the obstacles in the way, both technological and conceptual, like the fact that proofreading connectome connections - manually verifying and editing them - is a giant bottleneck, or the very definition of memory: what counts as a memory, let alone a "nontrivial" memory, and so on. And the panel takes lots of questions from the audience as well.</p>



<p>I apologize that the audio is not crystal clear in this recording. I did my best to clean it up, and I take full blame for not setting up my audio recorder to capture the best sound. So, if you are a listener, I'd encourage you to check out the video version, which also has subtitles throughout for when the language isn't clear.</p>



<p>Anyway, this is a fun and smart group of people, and I look forward to another one next year, I hope.</p>



<p>The last time I did this was episode 180, BI 180, which I link to in the show notes. Before that, I had on Ken Hayworth, who, as I mentioned, runs Aspirational Neuroscience, and Randal Koene, who is on the panel this time. They were on to talk about the future possibility of uploading minds to computers based on connectomes. That was episode 103.</p>



<ul class="wp-block-list">
<li><a href="https://aspirationalneuroscience.org/">Aspirational Neuroscience</a></li>



<li>Panel
<ul class="wp-block-list">
<li><a href="https://scholar.google.com/citations?user=XSjXVbQAAAAJ&amp;hl=en">Michał Januszewski</a><ul><li><a href="https://bsky.app/profile/michalwj.bsky.social">@michalwj.bsky.social</a></li></ul>
<ul class="wp-block-list">
<li>Research scientist (connectomics) with Google Research, automated neural tracing expert</li>
</ul>
</li>



<li><a href="https://alleninstitute.org/person/sven-dorkenwald/">Sven Dorkenwald</a>
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/sdorkenw.bsky.social">@sdorkenw.bsky.social</a></li>



<li>Research fellow at the Allen Institute, first author on the <a href="https://www.nature.com/articles/s41586-024-07558-y">first full Drosophila connectome</a> paper</li>
</ul>
</li>



<li><a href="https://www.esi-frankfurt.de/people/heleneschmidt/">Helene Schmidt</a><ul><li><a href="https://bsky.app/profile/helenelab.bsky.social">@helenelab.bsky.social</a></li></ul>
<ul class="wp-block-list">
<li>Group leader at Ernst Strungmann Institute, hippocampus connectome &amp; EM expert</li>
</ul>
</li>



<li><a href="https://www.e11.bio/andrew-payne">Andrew Payne</a>
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/andrewcpayne.bsky.social">@andrewcpayne.bsky.social</a></li>



<li>Founder of <a href="https://www.e11.bio/">E11 Bio</a>, expansion microscopy &amp; viral tracing expert&nbsp;</li>
</ul>
</li>



<li><a href="https://carboncopies.org/About/Team/Bios/RandalKoene/">Randal Koene</a>
<ul class="wp-block-list">
<li>Founder of the <a href="https://carboncopies.org/">Carboncopies Foundation</a>, computational neuroscientist dedicated to the problem of brain emulation.</li>
</ul>
</li>
</ul>
</li>



<li>Related episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/103/">BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading</a></li>



<li><a href="https://braininspired.co/podcast/180/">BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding</a></li>
</ul>
</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2554/227.mp3" length="72408600" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Can you look at all the synaptic connections of a brain, and tell me one nontrivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group.



I was recently invited for the second time to chair a panel of experts to discuss that question and all the issues surrounding it: how to decode a nontrivial memory from a static map of synaptic connectivity.



Before I play that recording, let me set the stage a bit more.



Aspirational Neuroscience is a community of neuroscientists run by Kenneth Hayworth, with the goal, from their website, to "balance aspirational thinking with respect to the long-term implications of a successful neuroscience with practical realism about our current state of ignorance and knowledge." One of those aspirations is to decode things - memories, learned behaviors, and so on - from static connectomes. They hold satellite events at the SfN conference and invite experts in connectomics from academia and industry to share their thoughts and progress toward that goal.



In this panel discussion, we touch on multiple relevant topics. One question is what the right experimental design, or designs, would be to show that we are decoding memory, and what counts as a benchmark across model organisms and theoretical frameworks. We discuss some of the obstacles in the way, both technological and conceptual, like the fact that proofreading connectome connections - manually verifying and editing them - is a giant bottleneck, or the very definition of memory: what counts as a memory, let alone a "nontrivial" memory, and so on. And the panel takes lots of questions from the audience as well.



I apologize that the audio is not crystal clear in this recording. I did my best to clean it up, and I take full blame for not setting up my audio recorder to capture the best sound. So, if you are a listener, I'd encourage you to check out the video version, which also has subtitles throughout for when the language isn't clear.



Anyway, this is a fun and smart group of people, and I look forward to another one next year, I hope.



The last time I did this was episode 180, BI 180, which I link to in the show notes. Before that, I had on Ken Hayworth, who, as I mentioned, runs Aspirational Neuroscience, and Randal Koene, who is on the panel this time. They were on to talk about the future possibility of uploading minds to computers based on connectomes. That was episode 103.




Aspirational Neuroscience



Panel

Michał Januszewski@michalwj.bsky.social

Research scientist (connectomics) with Google Research, automated neural tracing expert





Sven Dorkenwald

@sdorkenw.bsky.social



Research fellow at the Allen Institute, first author on the first full Drosophila connectome paper





Helene Schmidt@helenelab.bsky.social

Group leader at Ernst Strungmann Institute, hippocampus connectome &amp; EM expert





Andrew Payne

@andrewcpayne.bsky.social



Founder of E11 Bio, expansion microscopy &amp; viral tracing expert&nbsp;





Randal Koene

Founder of the Carboncopies Foundation, computational neuroscientist dedicated to the problem of brain emulation.







Related episodes:

BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading



BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/12/ytArtboard-4-copy.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/12/ytArtboard-4-copy.jpg</url>
		<title>BI 227 Decoding Memories: Aspirational Neuroscience 2025</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:15:08</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Can you look at all the synaptic connections of a brain, and tell me one nontrivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group.



I was recently invited for the second time to chair a panel of experts to discuss that question and all the issues around that question - how to decode a non-trivial memory from]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/12/ytArtboard-4-copy.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 226 Tatiana Engel: The High and Low Dimensional Brain</title>
	<link>https://braininspired.co/podcast/226/</link>
	<pubDate>Wed, 03 Dec 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">84ebd4bc-77d1-52c3-ae25-e6398b77bc87</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Tatiana Engel runs the Engel Lab at Princeton University, in the Princeton Neuroscience Institute. She's also part of the <a href="https://www.internationalbrainlab.com/">International Brain Laboratory</a>, a massive across-lab, across-world collaboration, which you'll hear more about. My main impetus for inviting Tatiana was to talk about two projects she's been working on. One of those is connecting the functional dynamics of cognition with the connectivity of the underlying neural networks on which those dynamics unfold. We know the brain is high-dimensional, with lots of interacting connections; we also know the activity of those networks can often be described by lower-dimensional entities called manifolds; and Tatiana and her lab work to connect those two levels with something they call latent circuits. So you'll hear about that, you'll also hear about how the timescales of neurons across the brain are different but the same and why this is cool and surprising, and we discuss many related topics.</p>



<ul class="wp-block-list">
<li><a href="https://engel-lab.princeton.edu/people/tatiana-engel">Engel Lab</a>.</li>



<li><a href="https://bsky.app/profile/engeltatiana.bsky.social">@engeltatiana.bsky.social</a>.</li>



<li><a href="https://www.internationalbrainlab.com/">International Brain Laboratory</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-025-01869-7">Latent circuit inference from heterogeneous neural responses during cognitive tasks</a></li>



<li><a href="https://www.nature.com/articles/s41586-025-09199-1">The dynamics and geometry of choice in the premotor cortex</a></li>



<li><a href="https://www.nature.com/articles/s41583-023-00693-x">A unifying perspective on neural manifolds and circuits for cognition</a></li>



<li><a href="https://www.biorxiv.org/content/10.1101/2025.08.30.673281v1">Brain-wide organization of intrinsic timescales at single-neuron resolution</a></li>



<li><a href="https://www.nature.com/articles/s42256-025-01127-2">Single-unit activations confer inductive biases for emergent circuit solutions to cognitive tasks</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:03 - No central executive
5:01 - International brain lab
15:57 - Tatiana's background
17:48 - Manifolds
24:49 - Dynamical systems
33:10 - Latent task circuits
47:01 - Mixed selectivity
1:00:21 - Internal and external dynamics
1:03:47 - Modern vs classical modeling
1:14:30 - Intrinsic timescales
1:26:05 - Single trial dynamics
1:29:59 - Future of manifolds</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Tatiana Engel runs the Engel lab in the Princeton Neuroscience Institute at Princeton University. She's also part of the <a href="https://www.internationalbrainlab.com/">International Brain Laboratory</a>, a massive collaboration spanning labs around the world, which you'll hear more about. My main impetus for inviting Tatiana was to talk about two projects she's been working on. One of those is connecting the functional dynamics of cognition with the connectivity of the underlying neural networks on which those dynamics unfold. We know the brain is high-dimensional: it has lots of interacting connections. We also know the activity of those networks can often be described by lower-dimensional entities called manifolds. Tatiana and her lab work to connect those two levels of description with something they call latent circuits. So you'll hear about that. You'll also hear about how the timescales of neurons across the brain are different but the same, why this is cool and surprising, and many related topics.</p>



<ul class="wp-block-list">
<li><a href="https://engel-lab.princeton.edu/people/tatiana-engel">Engel Lab</a>.</li>



<li><a href="https://bsky.app/profile/engeltatiana.bsky.social">@engeltatiana.bsky.social</a>.</li>



<li><a href="https://www.internationalbrainlab.com/">International Brain Laboratory</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-025-01869-7">Latent circuit inference from heterogeneous neural responses during cognitive tasks</a></li>



<li><a href="https://www.nature.com/articles/s41586-025-09199-1">The dynamics and geometry of choice in the premotor cortex</a></li>



<li><a href="https://www.nature.com/articles/s41583-023-00693-x">A unifying perspective on neural manifolds and circuits for cognition</a></li>



<li><a href="https://www.biorxiv.org/content/10.1101/2025.08.30.673281v1">Brain-wide organization of intrinsic timescales at single-neuron resolution</a></li>



<li><a href="https://www.nature.com/articles/s42256-025-01127-2">Single-unit activations confer inductive biases for emergent circuit solutions to cognitive tasks</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:03 - No central executive
5:01 - International brain lab
15:57 - Tatiana's background
17:48 - Manifolds
24:49 - Dynamical systems
33:10 - Latent task circuits
47:01 - Mixed selectivity
1:00:21 - Internal and external dynamics
1:03:47 - Modern vs classical modeling
1:14:30 - Intrinsic timescales
1:26:05 - Single trial dynamics
1:29:59 - Future of manifolds</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2552/226.mp3" length="93688785" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tatiana Engel runs the Engel lab in the Princeton Neuroscience Institute at Princeton University. She's also part of the International Brain Laboratory, a massive collaboration spanning labs around the world, which you'll hear more about. My main impetus for inviting Tatiana was to talk about two projects she's been working on. One of those is connecting the functional dynamics of cognition with the connectivity of the underlying neural networks on which those dynamics unfold. We know the brain is high-dimensional: it has lots of interacting connections. We also know the activity of those networks can often be described by lower-dimensional entities called manifolds. Tatiana and her lab work to connect those two levels of description with something they call latent circuits. So you'll hear about that. You'll also hear about how the timescales of neurons across the brain are different but the same, why this is cool and surprising, and many related topics.




Engel Lab.



@engeltatiana.bsky.social.



International Brain Laboratory.



Related papers:

Latent circuit inference from heterogeneous neural responses during cognitive tasks



The dynamics and geometry of choice in the premotor cortex



A unifying perspective on neural manifolds and circuits for cognition



Brain-wide organization of intrinsic timescales at single-neuron resolution



Single-unit activations confer inductive biases for emergent circuit solutions to cognitive tasks






0:00 - Intro
3:03 - No central executive
5:01 - International brain lab
15:57 - Tatiana's background
17:48 - Manifolds
24:49 - Dynamical systems
33:10 - Latent task circuits
47:01 - Mixed selectivity
1:00:21 - Internal and external dynamics
1:03:47 - Modern vs classical modeling
1:14:30 - Intrinsic timescales
1:26:05 - Single trial dynamics
1:29:59 - Future of manifolds]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/12/web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/12/web.jpg</url>
		<title>BI 226 Tatiana Engel: The High and Low Dimensional Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:36:18</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tatiana Engel runs the Engel lab in the Princeton Neuroscience Institute at Princeton University. She's also part of the International Brain Laboratory, a massive collaboration spanning labs around the world, which you'll hear more about. My main impetus for inviting Tatiana was to talk about two projects she's been working on. One of those is connecting the functional dynamics of cognition with]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/12/web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 225 Henk De Regt: Understanding in Machines and Humans</title>
	<link>https://braininspired.co/podcast/225/</link>
	<pubDate>Wed, 19 Nov 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">d8ce145b-eae6-5242-91e2-cdada0bea97b</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p><a href="https://www.ru.nl/en/people/regt-h-de">Henk de Regt</a> is a professor of Philosophy of Science and the director of the <a href="https://www.ru.nl/en/isis">Institute for Science in Society</a> at Radboud University. Henk wrote the book on Understanding. Literally, he wrote what has become a classic in philosophy of science, <a href="https://amzn.to/4hj6k1X">Understanding Scientific Understanding</a>.</p>





<p>Henk's account of understanding goes roughly like this, though you can learn more in his book and other writings. Claiming to understand something in science requires that you can produce a theory-based explanation of whatever you claim to understand, and it depends on your having the right scientific skills to work productively with that theory - for example, making qualitative predictions about it without performing calculations. So understanding is contextual and depends on the skills of the understander.</p>



<p>There's more nuance to it, so like I said you should read the book, but this account of understanding distinguishes it from explanation itself, and distinguishes it from other accounts of understanding, which take understanding to be either a personal subjective sense - that feeling of something clicking in your mind - or simply the addition of more facts about something.</p>



<p>In this conversation, we revisit Henk's work on understanding, and how it touches on many other topics, like realism, the use of metaphors, how public understanding differs from expert understanding, idealization and abstraction in science, and so on.</p>



<p>And, because Henk's kind of understanding doesn't depend on subjective awareness or on things being true, he and his colleagues have begun working on whether there could be a benchmark for degrees of understanding, both to assess whether AI demonstrates understanding and to serve as a common measure for humans and machines.</p>



<ul class="wp-block-list">
<li><a href="https://scholar.google.com/citations?view_op=list_works&amp;hl=en&amp;hl=en&amp;user=sBxqGrsAAAAJ&amp;sortby=pubdate">Google Scholar page</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:jjoeezq5qw5ofmqtd5um7svi">@henkderegt.bsky.social</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/4hj6k1X">Understanding Scientific Understanding</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s11023-024-09657-1">Towards a benchmark for scientific understanding in humans and machines</a></li>



<li><a href="https://www.jbe-platform.com/content/journals/10.1075/msw.22016.sme">Metaphors as tools for understanding in science communication among experts and to the public</a></li>



<li><a href="https://link.springer.com/article/10.1007/s40656-024-00644-4">Two scientific perspectives on nerve signal propagation: how incompatible approaches jointly promote progress in explanatory understanding</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
10:13 - Philosophy of explanation vs understanding
14:32 - Different accounts of understanding
20:29 - Henk's account of understanding
26:47 - What counts as intelligible?
34:09 - Hodgkin and Huxley alternative
37:54 - Familiarity vs understanding
44:42 - Measuring understanding
1:02:53 - Machine understanding
1:16:39 - Non-factive understanding
1:23:34 - Abstraction vs understanding
1:31:07 - Public understanding of science
1:41:35 - Reflections on the book</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p><a href="https://www.ru.nl/en/people/regt-h-de">Henk de Regt</a> is a professor of Philosophy of Science and the director of the <a href="https://www.ru.nl/en/isis">Institute for Science in Society</a> at Radboud University. Henk wrote the book on Understanding. Literally, he wrote what has become a classic in philosophy of science, <a href="https://amzn.to/4hj6k1X">Understanding Scientific Understanding</a>.</p>





<p>Henk's account of understanding goes roughly like this, though you can learn more in his book and other writings. Claiming to understand something in science requires that you can produce a theory-based explanation of whatever you claim to understand, and it depends on your having the right scientific skills to work productively with that theory - for example, making qualitative predictions about it without performing calculations. So understanding is contextual and depends on the skills of the understander.</p>



<p>There's more nuance to it, so like I said you should read the book, but this account of understanding distinguishes it from explanation itself, and distinguishes it from other accounts of understanding, which take understanding to be either a personal subjective sense - that feeling of something clicking in your mind - or simply the addition of more facts about something.</p>



<p>In this conversation, we revisit Henk's work on understanding, and how it touches on many other topics, like realism, the use of metaphors, how public understanding differs from expert understanding, idealization and abstraction in science, and so on.</p>



<p>And, because Henk's kind of understanding doesn't depend on subjective awareness or on things being true, he and his colleagues have begun working on whether there could be a benchmark for degrees of understanding, both to assess whether AI demonstrates understanding and to serve as a common measure for humans and machines.</p>



<ul class="wp-block-list">
<li><a href="https://scholar.google.com/citations?view_op=list_works&amp;hl=en&amp;hl=en&amp;user=sBxqGrsAAAAJ&amp;sortby=pubdate">Google Scholar page</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:jjoeezq5qw5ofmqtd5um7svi">@henkderegt.bsky.social</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/4hj6k1X">Understanding Scientific Understanding</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s11023-024-09657-1">Towards a benchmark for scientific understanding in humans and machines</a></li>



<li><a href="https://www.jbe-platform.com/content/journals/10.1075/msw.22016.sme">Metaphors as tools for understanding in science communication among experts and to the public</a></li>



<li><a href="https://link.springer.com/article/10.1007/s40656-024-00644-4">Two scientific perspectives on nerve signal propagation: how incompatible approaches jointly promote progress in explanatory understanding</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
10:13 - Philosophy of explanation vs understanding
14:32 - Different accounts of understanding
20:29 - Henk's account of understanding
26:47 - What counts as intelligible?
34:09 - Hodgkin and Huxley alternative
37:54 - Familiarity vs understanding
44:42 - Measuring understanding
1:02:53 - Machine understanding
1:16:39 - Non-factive understanding
1:23:34 - Abstraction vs understanding
1:31:07 - Public understanding of science
1:41:35 - Reflections on the book</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2549/225.mp3" length="100485065" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Henk de Regt is a professor of Philosophy of Science and the director of the Institute for Science in Society at Radboud University. Henk wrote the book on Understanding. Literally, he wrote what has become a classic in philosophy of science, Understanding Scientific Understanding.





Henk's account of understanding goes roughly like this, though you can learn more in his book and other writings. Claiming to understand something in science requires that you can produce a theory-based explanation of whatever you claim to understand, and it depends on your having the right scientific skills to work productively with that theory - for example, making qualitative predictions about it without performing calculations. So understanding is contextual and depends on the skills of the understander.



There's more nuance to it, so like I said you should read the book, but this account of understanding distinguishes it from explanation itself, and distinguishes it from other accounts of understanding, which take understanding to be either a personal subjective sense - that feeling of something clicking in your mind - or simply the addition of more facts about something.



In this conversation, we revisit Henk's work on understanding, and how it touches on many other topics, like realism, the use of metaphors, how public understanding differs from expert understanding, idealization and abstraction in science, and so on.



And, because Henk's kind of understanding doesn't depend on subjective awareness or on things being true, he and his colleagues have begun working on whether there could be a benchmark for degrees of understanding, both to assess whether AI demonstrates understanding and to serve as a common measure for humans and machines.




Google Scholar page



Social: @henkderegt.bsky.social



Book:

Understanding Scientific Understanding.





Related papers

Towards a benchmark for scientific understanding in humans and machines



Metaphors as tools for understanding in science communication among experts and to the public



Two scientific perspectives on nerve signal propagation: how incompatible approaches jointly promote progress in explanatory understanding






0:00 - Intro
10:13 - Philosophy of explanation vs understanding
14:32 - Different accounts of understanding
20:29 - Henk's account of understanding
26:47 - What counts as intelligible?
34:09 - Hodgkin and Huxley alternative
37:54 - Familiarity vs understanding
44:42 - Measuring understanding
1:02:53 - Machine understanding
1:16:39 - Non-factive understanding
1:23:34 - Abstraction vs understanding
1:31:07 - Public understanding of science
1:41:35 - Reflections on the book]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/11/web-thumb-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/11/web-thumb-1.jpg</url>
		<title>BI 225 Henk De Regt: Understanding in Machines and Humans</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:30</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Henk de Regt is a professor of Philosophy of Science and the director of the Institute for Science in Society at Radboud University. Henk wrote the book on Understanding. Literally, he wrote what has become a classic in philosophy of science, Understanding Scientific Understanding.





Henk's account of understanding goes roughly like this, though you can learn more in his book and other wri]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/11/web-thumb-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 224 Dan Nicholson: Schrödinger&#8217;s What is Life? Revisited</title>
	<link>https://braininspired.co/podcast/224/</link>
	<pubDate>Wed, 05 Nov 2025 10:39:13 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">1f4cbe88-faf2-53bc-a482-c5cd8c1f97c0</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>My guest today is Dan Nicholson, Assistant Professor of Philosophy at George Mason University, here to talk about his little book, <a href="https://www.cambridge.org/core/books/what-is-life-revisited/E6B3EA136720CF50C9480ADB8F41A6F4">What Is Life? Revisited</a>. Erwin Schrödinger's What Is Life? is a famous book that people point to as having predicted DNA and as having influenced and inspired many well-known biologists, ushering in the molecular biology revolution. But Schrödinger was a physicist, not a biologist, and he spent very little time and effort on understanding biology.</p>



<p>What was he up to? Why did he write this "famous little book"? Schrödinger had an agenda, a physics agenda: he wanted to save the older deterministic version of quantum physics from the new indeterministic version. When Dan was on the podcast a few years ago, we talked about the machine view of biological systems, how everything has become a "mechanism", and how that view fails to capture what modern science is actually telling us: that organisms are unlike machines in important ways. That work led Dan down this path to Schrödinger's What Is Life?, which he argues was a major contributor to the machine metaphor so ubiquitous in biology today. One of the reasons I'm interested in this kind of work is that the cognitive sciences, including neuroscience and artificial intelligence, inherited this mechanistic perspective and swallowed it so hard that if you don't include the word "mechanism" in your research paper, you vastly decrease your chances of getting it published - when in fact the mechanistic perspective is one super useful perspective among many.</p>



<ul class="wp-block-list">
<li><a href="https://philosophy.gmu.edu/people/dnicho">Dan’s website</a>. <a href="https://scholar.google.com/citations?hl=en&amp;user=5gxpRPYAAAAJ&amp;view_op=list_works&amp;sortby=pubdate">Google Scholar</a>.</li>



<li>Social: <a href="https://twitter.com/NicholsonHPBio">@NicholsonHPBio</a>; <a href="https://bsky.app/profile/djnicholson.bsky.social">@djnicholson.bsky.social</a></li>



<li><a href="https://www.dropbox.com/scl/fi/xe79f7jgks4rirpsv7t58/Nicholson-2025-What-Is-Life-Revisited.pdf?rlkey=4orqyk437gq7ne2y428jq2tlx&amp;dl=1" target="_blank" rel="noreferrer noopener">What Is Life? Revisited</a></li>



<li>Previous episode:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/150/">BI 150 Dan Nicholson: Machines, Organisms, Processes</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/11/BI-224-transcript-Nicholson.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
7:27 - Why Schrödinger wrote What is Life
15:13 - Aperiodic crystal and the meaning of code
21:39 - Order-from-order, order-from-disorder
28:32 - Appeal to authority
37:48 - Cell as machine
39:33 - Relation between DNA and organism (development)
44:44 - Negentropy
53:54 - Original contributions
58:54 - Mechanistic metaphor in neuroscience
1:16:05 - What's the lesson?
1:28:06 - Historical sleuthing
1:39:49 - Modern philosophy of biology</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>My guest today is Dan Nicholson, Assistant Professor of Philosophy at George Mason University, here to talk about his little book, <a href="https://www.cambridge.org/core/books/what-is-life-revisited/E6B3EA136720CF50C9480ADB8F41A6F4">What Is Life? Revisited</a>. Erwin Schrödinger's What Is Life? is a famous book that people point to as having predicted DNA and as having influenced and inspired many well-known biologists, ushering in the molecular biology revolution. But Schrödinger was a physicist, not a biologist, and he spent very little time and effort on understanding biology.</p>



<p>What was he up to? Why did he write this "famous little book"? Schrödinger had an agenda, a physics agenda: he wanted to save the older deterministic version of quantum physics from the new indeterministic version. When Dan was on the podcast a few years ago, we talked about the machine view of biological systems, how everything has become a "mechanism", and how that view fails to capture what modern science is actually telling us: that organisms are unlike machines in important ways. That work led Dan down this path to Schrödinger's What Is Life?, which he argues was a major contributor to the machine metaphor so ubiquitous in biology today. One of the reasons I'm interested in this kind of work is that the cognitive sciences, including neuroscience and artificial intelligence, inherited this mechanistic perspective and swallowed it so hard that if you don't include the word "mechanism" in your research paper, you vastly decrease your chances of getting it published - when in fact the mechanistic perspective is one super useful perspective among many.</p>



<ul class="wp-block-list">
<li><a href="https://philosophy.gmu.edu/people/dnicho">Dan’s website</a>. <a href="https://scholar.google.com/citations?hl=en&amp;user=5gxpRPYAAAAJ&amp;view_op=list_works&amp;sortby=pubdate">Google Scholar</a>.</li>



<li>Social: <a href="https://twitter.com/NicholsonHPBio">@NicholsonHPBio</a>; <a href="https://bsky.app/profile/djnicholson.bsky.social">@djnicholson.bsky.social</a></li>



<li><a href="https://www.dropbox.com/scl/fi/xe79f7jgks4rirpsv7t58/Nicholson-2025-What-Is-Life-Revisited.pdf?rlkey=4orqyk437gq7ne2y428jq2tlx&amp;dl=1" target="_blank" rel="noreferrer noopener">What Is Life? Revisited</a></li>



<li>Previous episode:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/150/">BI 150 Dan Nicholson: Machines, Organisms, Processes</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/11/BI-224-transcript-Nicholson.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
7:27 - Why Schrödinger wrote What is Life
15:13 - Aperiodic crystal and the meaning of code
21:39 - Order-from-order, order-from-disorder
28:32 - Appeal to authority
37:48 - Cell as machine
39:33 - Relation between DNA and organism (development)
44:44 - Negentropy
53:54 - Original contributions
58:54 - Mechanistic metaphor in neuroscience
1:16:05 - What's the lesson?
1:28:06 - Historical sleuthing
1:39:49 - Modern philosophy of biology</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2543/224.mp3" length="105790417" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





My guest today is Dan Nicholson, Assistant Professor of Philosophy at George Mason University, here to talk about his little book, What Is Life? Revisited. Erwin Schrödinger's What Is Life is a famous book that people point to as having predicted DNA and as having influenced and inspired many well-known biologists, ushering in the molecular biology revolution. But Schrödinger was a physicist, not a biologist, and he devoted very little time and effort to understanding biology.



What was he up to? Why did he write this "famous little book"? Schrödinger had an agenda, a physics agenda: he wanted to save the older deterministic version of quantum physics from the new indeterministic version. When Dan was on the podcast a few years ago, we talked about the machine view of biological systems, how everything has become a "mechanism", and how that view fails to capture what modern science is actually telling us: that organisms are unlike machines in important ways. That work led Dan down this path to Schrödinger's What Is Life, which he argues was a major contributor to the machine metaphor so ubiquitous today in biology. One of the reasons I'm interested in this kind of work is that the cognitive sciences, including neuroscience and artificial intelligence, inherited this mechanistic perspective, and swallowed it so hard that if you don't include the word "mechanism" in your research paper, you vastly decrease your chances of getting your work published, when in fact the mechanistic perspective is just one super useful perspective among many.




Dan’s website. Google Scholar.



Social: @NicholsonHPBio; @djnicholson.bsky.social



What Is Life? Revisited



Previous episode:

BI 150 Dan Nicholson: Machines, Organisms, Processes






Read the transcript.



0:00 - Intro
7:27 - Why Schrödinger wrote What is Life
15:13 - Aperiodic crystal and the meaning of code
21:39 - Order-from-order, order-from-disorder
28:32 - Appeal to authority
37:48 - Cell as machine
39:33 - Relation between DNA and organism (development)
44:44 - Negentropy
53:54 - Original contributions
58:54 - Mechanistic metaphor in neuroscience
1:16:05 - What's the lesson?
1:28:06 - Historical sleuthing
1:39:49 - Modern philosophy of biology]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/11/web-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/11/web-thumb.jpg</url>
		<title>BI 224 Dan Nicholson: Schrödinger&#8217;s What is Life? Revisited</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:49:02</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





My guest today is Dan Nicholson, Assistant Professor of Philosophy at George Mason University, here to talk about his little book, What Is Life? Revisited. Erwin Schrödinger's What Is Life is a famous book that people point to as having predicted DNA and influenced and inspired many well-known biologists ushering in the molecular biology revolution. But Schrödinger was a physicist, not a]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/11/web-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 223 Vicente Raja: Ecological Psychology Motifs in Neuroscience</title>
	<link>https://braininspired.co/podcast/223/</link>
	<pubDate>Wed, 22 Oct 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2539</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Vicente Raja is a research fellow at the University of Murcia in Spain, where he is also part of the Minimal Intelligence Lab run by Paco Calvo, where they study plant behavior, and he is an external affiliate faculty member of the Rotman Institute of Philosophy at Western University. He is a philosopher and a cognitive scientist, and he specializes in applying concepts from ecological psychology to understand how brains and organisms, including plants, get about in the world.</p>



<p>We talk about many facets of his research, both philosophical and scientific, and maybe the best way to describe the conversation is a tour through many of the concepts in ecological psychology - like affordances, ecological information, direct perception, and resonance - and how those concepts do and don't, and should or shouldn’t, contribute to our understanding of brains and minds.</p>



<p>We also discuss Vicente's use of the term motif to describe scientific concepts that allow different researchers to study roughly the same things even though they have different definitions for those things, and toward the end we touch on his work studying plant behavior.</p>





<ul class="wp-block-list">
<li><a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">MINT Lab</a>.</li>



<li>Book: <a href="https://amzn.to/3VVBxOD" target="_blank" rel="noreferrer noopener">Ecological psychology</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:cta6lto5xobzzbu67nohfcsk">@diovicen.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2206.04603">In search for an alternative to the computer metaphor of the mind and brain</a></li>



<li><a href="https://link.springer.com/article/10.1007/s11097-020-09711-0">Embodiment and cognitive neuroscience: the forgotten tales</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ejn.16434">The motifs of radical embodied neuroscience</a></li>



<li><a href="https://www.nature.com/articles/s41598-020-76588-z">The Dynamics of Plant Nutation</a></li>



<li><a href="https://www.researchgate.net/publication/395300333_Ecological_Resonance_Is_Reflected_in_Human_Brain_Activity">Ecological Resonance Is Reflected in Human Brain Activity</a></li>



<li><a href="https://www.researchgate.net/publication/395502859_Affordances_are_for_life_and_not_just_for_maximizing_reproductive_fitness">Affordances are for life (and not just for maximizing reproductive fitness)</a></li>



<li><a href="https://www.researchgate.net/publication/382608399_Two_species_of_realism?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6InByb2ZpbGUiLCJwYWdlIjoicHJvZmlsZSJ9fQ">Two species of realism</a></li>
</ul>
</li>



<li>Lots of previous guests and topics mentioned:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/152/">BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</a></li>



<li><a href="https://braininspired.co/podcast/190/">BI 190 Luis Favela: The Ecological Brain</a></li>



<li><a href="https://braininspired.co/podcast/191/">BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence</a></li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/10/BI-223-transcript-Raja.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:55 - Affordances and neuroscience
13:46 - Motifs
39:41 - Reconciling neuroscience and ecological psychology
1:07:55 - Predictive processing
1:15:32 - Resonance
1:23:00 - Biggest holes in ecological psychology
1:29:50 - Plant cognition</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Vicente Raja is a research fellow at University of Murcia in Spain, where he is also part of the Minimal Intelligence Lab run by Paco Calvo, where they study pla]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Vicente Raja is a research fellow at the University of Murcia in Spain, where he is also part of the Minimal Intelligence Lab run by Paco Calvo, where they study plant behavior, and he is an external affiliate faculty member of the Rotman Institute of Philosophy at Western University. He is a philosopher and a cognitive scientist, and he specializes in applying concepts from ecological psychology to understand how brains and organisms, including plants, get about in the world.</p>



<p>We talk about many facets of his research, both philosophical and scientific, and maybe the best way to describe the conversation is a tour through many of the concepts in ecological psychology - like affordances, ecological information, direct perception, and resonance - and how those concepts do and don't, and should or shouldn’t, contribute to our understanding of brains and minds.</p>



<p>We also discuss Vicente's use of the term motif to describe scientific concepts that allow different researchers to study roughly the same things even though they have different definitions for those things, and toward the end we touch on his work studying plant behavior.</p>





<ul class="wp-block-list">
<li><a href="https://www.um.es/mintlab/index.php/about/people/vicente-raja/">MINT Lab</a>.</li>



<li>Book: <a href="https://amzn.to/3VVBxOD" target="_blank" rel="noreferrer noopener">Ecological psychology</a></li>



<li>Social: <a href="https://bsky.app/profile/did:plc:cta6lto5xobzzbu67nohfcsk">@diovicen.bsky.social</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2206.04603">In search for an alternative to the computer metaphor of the mind and brain</a></li>



<li><a href="https://link.springer.com/article/10.1007/s11097-020-09711-0">Embodiment and cognitive neuroscience: the forgotten tales</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ejn.16434">The motifs of radical embodied neuroscience</a></li>



<li><a href="https://www.nature.com/articles/s41598-020-76588-z">The Dynamics of Plant Nutation</a></li>



<li><a href="https://www.researchgate.net/publication/395300333_Ecological_Resonance_Is_Reflected_in_Human_Brain_Activity">Ecological Resonance Is Reflected in Human Brain Activity</a></li>



<li><a href="https://www.researchgate.net/publication/395502859_Affordances_are_for_life_and_not_just_for_maximizing_reproductive_fitness">Affordances are for life (and not just for maximizing reproductive fitness)</a></li>



<li><a href="https://www.researchgate.net/publication/382608399_Two_species_of_realism?_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6InByb2ZpbGUiLCJwYWdlIjoicHJvZmlsZSJ9fQ">Two species of realism</a></li>
</ul>
</li>



<li>Lots of previous guests and topics mentioned:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/152/">BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</a></li>



<li><a href="https://braininspired.co/podcast/190/">BI 190 Luis Favela: The Ecological Brain</a></li>



<li><a href="https://braininspired.co/podcast/191/">BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence</a></li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/10/BI-223-transcript-Raja.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:55 - Affordances and neuroscience
13:46 - Motifs
39:41 - Reconciling neuroscience and ecological psychology
1:07:55 - Predictive processing
1:15:32 - Resonance
1:23:00 - Biggest holes in ecological psychology
1:29:50 - Plant cognition</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2539/223.mp3" length="95908136" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Vicente Raja is a research fellow at the University of Murcia in Spain, where he is also part of the Minimal Intelligence Lab run by Paco Calvo, where they study plant behavior, and he is an external affiliate faculty member of the Rotman Institute of Philosophy at Western University. He is a philosopher and a cognitive scientist, and he specializes in applying concepts from ecological psychology to understand how brains and organisms, including plants, get about in the world.



We talk about many facets of his research, both philosophical and scientific, and maybe the best way to describe the conversation is a tour through many of the concepts in ecological psychology - like affordances, ecological information, direct perception, and resonance - and how those concepts do and don't, and should or shouldn’t, contribute to our understanding of brains and minds.



We also discuss Vicente's use of the term motif to describe scientific concepts that allow different researchers to study roughly the same things even though they have different definitions for those things, and toward the end we touch on his work studying plant behavior.






MINT Lab.



Book: Ecological psychology



Social: @diovicen.bsky.social



Related papers

In search for an alternative to the computer metaphor of the mind and brain



Embodiment and cognitive neuroscience: the forgotten tales.



The motifs of radical embodied neuroscience



The Dynamics of Plant Nutation



Ecological Resonance Is Reflected in Human Brain Activity



Affordances are for life (and not just for maximizing reproductive fitness)



Two species of realism





Lots of previous guests and topics mentioned:

BI 152 Michael L. Anderson: After Phrenology: Neural Reuse



BI 190 Luis Favela: The Ecological Brain



BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence






Read the&nbsp;transcript.



0:00 - Intro
4:55 - Affordances and neuroscience
13:46 - Motifs
39:41 - Reconciling neuroscience and ecological psychology
1:07:55 - Predictive processing
1:15:32 - Resonance
1:23:00 - Biggest holes in ecological psychology
1:29:50 - Plant cognition]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/10/223-Vicente-Raja-public.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/10/223-Vicente-Raja-public.jpg</url>
		<title>BI 223 Vicente Raja: Ecological Psychology Motifs in Neuroscience</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:39:01</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Vicente Raja is a research fellow at the University of Murcia in Spain, where he is also part of the Minimal Intelligence Lab run by Paco Calvo, where they study plant behavior, and he is an external affiliate faculty member of the Rotman Institute of Philosophy at Western University. He is a philosopher and a cognitive scientist, and he specializes in applying concepts from ecological psychology to understand how brains and organisms, including plants, get about in the world.



We talk about many facets of his research, both philosophical and scientific, and maybe the best way to describe the conversation is a tour among many of the concepts in ecological psychology - like affordances, ecological information, direct perception, and resonance, and how those concepts do and don't, and should or shouldn’t, contribute to our understanding of brains and minds.



We also discuss Vicente's use of the te]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/10/223-Vicente-Raja-public.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 222 Nikolay Kukushkin: Minds and Meaning from Nature&#8217;s Ideas</title>
	<link>https://braininspired.co/podcast/222/</link>
	<pubDate>Wed, 08 Oct 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2536</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Nikolay Kukushkin is an associate professor at New York University, and a senior scientist at Thomas Carew’s laboratory at the Center for Neural Science. He describes himself as a "molecular philosopher", owing to his day job as a molecular biologist and his broad perspective on how it all "hangs together", in the words of Wilfrid Sellars, who in 1962 wrote, “The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”.</p>



<p>That is what Niko does in his book <a href="https://amzn.to/46WT6UO">One Hand Clapping: Unraveling the Mystery of the Human Mind</a>.</p>



<p>This book is about essences across spatial scales in nature. More precisely, it's about giving names to what is fundamental, or essential, to how things and processes function in nature. Niko argues those essences are where meaning resides. That's very abstract, and we'll spell it out more during the discussion. But as an example at the small scale, the essences of carbon and oxygen, respectively, are creation and destruction, which together allow metabolism to occur in biological organisms. Moving way up the scale, following this essence perspective leads Niko to the conclusion that there is no separation between our minds and the world, and that instead we should embrace the relational aspect of mind and world as a unifying principle. On the way, via evolution, we discuss many more examples, plus some of his own work studying how memory works in individual cells, not just neurons or populations of neurons in brains.</p>



<ul class="wp-block-list">
<li><a href="https://www.nikolaykukushkin.com/about">Niko's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/niko_kukushkin">@niko_kukushkin</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/46WT6UO">One Hand Clapping: Unraveling the Mystery of the Human Mind</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/10/BI-222-transcript-kukushkin.pdf" target="_blank" rel="noreferrer noopener">Read the transcript</a>.</p>



<p>0:00 - Intro
9:28 - Studying memory in cells
10:14 - Who the book is for
17:57 - Studying memory in cells
21:53 - What is memory?
29:49 - Book
29:52 - How the book came about
37:56 - Central message of the book
44:07 - Meaning in nature
49:09 - Meaning and essence
51:55 - Multicellularity and ant colonies
57:43 - Eukaryotes and complexification
1:03:38 - Why do we have brains?
1:06:17 - Emergence
1:10:58 - Language
1:12:41 - Human evolution
1:14:41 - Artificial intelligence, meaning and essences
1:25:49 - Consciousness</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Nikolay Kukushkin is an associate professor at New York University, and a senior scientist at Thomas Carew’s laboratory at the Center for Neural Science. He describes himself as a "molecular philosopher", owing to his day job as a molecular biologist and his broad perspective on how it all "hangs together", in the words of Wilfrid Sellars, who in 1962 wrote, “The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”.</p>



<p>That is what Niko does in his book <a href="https://amzn.to/46WT6UO">One Hand Clapping: Unraveling the Mystery of the Human Mind</a>.</p>



<p>This book is about essences across spatial scales in nature. More precisely, it's about giving names to what is fundamental, or essential, to how things and processes function in nature. Niko argues those essences are where meaning resides. That's very abstract, and we'll spell it out more during the discussion. But as an example at the small scale, the essences of carbon and oxygen, respectively, are creation and destruction, which together allow metabolism to occur in biological organisms. Moving way up the scale, following this essence perspective leads Niko to the conclusion that there is no separation between our minds and the world, and that instead we should embrace the relational aspect of mind and world as a unifying principle. On the way, via evolution, we discuss many more examples, plus some of his own work studying how memory works in individual cells, not just neurons or populations of neurons in brains.</p>



<ul class="wp-block-list">
<li><a href="https://www.nikolaykukushkin.com/about">Niko's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/niko_kukushkin">@niko_kukushkin</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/46WT6UO">One Hand Clapping: Unraveling the Mystery of the Human Mind</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/10/BI-222-transcript-kukushkin.pdf" target="_blank" rel="noreferrer noopener">Read the transcript</a>.</p>



<p>0:00 - Intro
9:28 - Studying memory in cells
10:14 - Who the book is for
17:57 - Studying memory in cells
21:53 - What is memory?
29:49 - Book
29:52 - How the book came about
37:56 - Central message of the book
44:07 - Meaning in nature
49:09 - Meaning and essence
51:55 - Multicellularity and ant colonies
57:43 - Eukaryotes and complexification
1:03:38 - Why do we have brains?
1:06:17 - Emergence
1:10:58 - Language
1:12:41 - Human evolution
1:14:41 - Artificial intelligence, meaning and essences
1:25:49 - Consciousness</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2536/222.mp3" length="86248657" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Nikolay Kukushkin is an associate professor at New York University, and a senior scientist at Thomas Carew’s laboratory at the Center for Neural Science. He describes himself as a "molecular philosopher", owing to his day job as a molecular biologist and his broad perspective on how it all "hangs together", in the words of Wilfrid Sellars, who in 1962 wrote, “The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”.



That is what Niko does in his book One Hand Clapping: Unraveling the Mystery of the Human Mind.



This book is about essences across spatial scales in nature. More precisely, it's about giving names to what is fundamental, or essential, to how things and processes function in nature. Niko argues those essences are where meaning resides. That's very abstract, and we'll spell it out more during the discussion. But as an example at the small scale, the essences of carbon and oxygen, respectively, are creation and destruction, which together allow metabolism to occur in biological organisms. Moving way up the scale, following this essence perspective leads Niko to the conclusion that there is no separation between our minds and the world, and that instead we should embrace the relational aspect of mind and world as a unifying principle. On the way, via evolution, we discuss many more examples, plus some of his own work studying how memory works in individual cells, not just neurons or populations of neurons in brains.




Niko's website.



Twitter:&nbsp;@niko_kukushkin.



Book:

One Hand Clapping: Unraveling the Mystery of the Human Mind






Read the transcript.



0:00 - Intro
9:28 - Studying memory in cells
10:14 - Who the book is for
17:57 - Studying memory in cells
21:53 - What is memory?
29:49 - Book
29:52 - How the book came about
37:56 - Central message of the book
44:07 - Meaning in nature
49:09 - Meaning and essence
51:55 - Multicellularity and ant colonies
57:43 - Eukaryotes and complexification
1:03:38 - Why do we have brains?
1:06:17 - Emergence
1:10:58 - Language
1:12:41 - Human evolution
1:14:41 - Artificial intelligence, meaning and essences
1:25:49 - Consciousness]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/10/web-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/10/web-thumb.jpg</url>
		<title>BI 222 Nikolay Kukushkin: Minds and Meaning from Nature&#8217;s Ideas</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:26</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Nikolay Kukushkin is an associate professor at New York University, and a senior scientist at Thomas Carew’s laboratory at the Center for Neural Science. He describes himself as a "molecular philosopher", owing to his day job as a molecular biologist and his broad perspective on how it all "hangs together", in the words of Wilfrid Sellars, who in 1962 wrote, “The aim of philosophy, abstractly ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/10/web-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 221 Ann Kennedy: Theory Beneath the Cortical Surface</title>
	<link>https://braininspired.co/podcast/221/</link>
	<pubDate>Wed, 24 Sep 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2533</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Ann Kennedy is Associate Professor at Scripps Research Institute and runs the <a href="https://www.kennedylab.org/">Laboratory for Theoretical Neuroscience and Behavior</a>.</p>



<p>Among other things, Ann has been studying how processes important in life, like survival, threat response, motivation, and pain, are mediated through subcortical brain areas like the hypothalamus. She also pays attention to the time course those life processes require, which has led her to consider how the expression of things like proteins helps shape neural processes throughout the brain, so we can behave appropriately in those different contexts.</p>



<p>You'll hear us talk about how this is still a pretty open field in theoretical neuroscience, in contrast to the historically heavy use of theory in popular brain areas throughout the cortex and the historically narrow focus on spikes, or action potentials, as the only game in town when it comes to neural computation. We discuss that, and I link in the show notes to a commentary piece Ann wrote in which she argues for both top-down and bottom-up theoretical approaches.</p>



<p>I also link to her papers about the early evolution of nervous systems and about how the heterogeneity, or diversity, of neurons is an advantage for neural computations. We also discuss a Kaggle competition she developed to benchmark automated behavioral labeling of behaving organisms, so that even when different researchers use different recording systems and setups, analyzing those data will produce consistent labels, making it easier to compare across labs and aggregate bigger and better data sets.</p>



<ul class="wp-block-list">
<li><a href="https://www.kennedylab.org/">Laboratory for Theoretical Neuroscience and Behavior</a>.</li>



<li>Social:
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/antihebbiann.bsky.social">@antihebbiann.bsky.social</a></li>



<li><a href="https://x.com/Antihebbiann">@Antihebbiann</a></li>
</ul>
</li>



<li>The <a href="https://www.kaggle.com/competitions/MABe-mouse-behavior-detection">Kaggle competition Ann developed</a> to generalize behavior categorization.</li>



<li>Related papers<ul><li><a href="https://www.kennedylab.org/_files/ugd/680470_913f9d06566941a1bb03600464888e76.pdf">Dynamics of neural activity in early nervous system evolution</a>.</li><li><a href="https://www.nature.com/articles/s41583-025-00965-8">Theoretical neuroscience has room to grow</a>.</li></ul>
<ul class="wp-block-list">
<li><a href="https://www.pnas.org/doi/epub/10.1073/pnas.2311885121">Neural heterogeneity controls computations in spiking neural networks</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2024.02.26.582069v1.full.pdf">A parabrachial hub for the prioritization of survival behavior</a>.</li>



<li><a href="https://www.cell.com/cell/fulltext/S0092-8674(22)01471-4">An approximate line attractor in the hypothalamus encodes an aggressive state</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/09/BI-221-transcript-kennedy.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:36 - Why study subcortical areas?
13:30 - Evolution
15:06 - Dynamical systems and time scales
21:32 - NeuroAI
28:37 - Before there were brains
33:11 - Endogenous spontaneous activity
40:09 - Natural vs artificial
43:09 - Different is more - heterogeneity
45:32 - Neuromodulators and neuropeptide functions
55:47 - Heterogeneity: manifolds, subspaces, and gain
1:02:43 - Control knobs
1:09:45 - Theoretical neuroscience has room to grow
1:19:59 - Hypothalamus
1:20:57 - Subcortical vs "higher" cognition
1:24:53 - 4E cognition
1:26:56 - Behavior benchmarking
1:37:26 - Current challenges
1:39:46 - Advice to young researchers</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Ann Kennedy is Associate Professor at Scripps Research Institute and runs the Laboratory for Theoretical Neuroscience and Behavior.



Among other things, Ann h]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Ann Kennedy is Associate Professor at Scripps Research Institute and runs the <a href="https://www.kennedylab.org/">Laboratory for Theoretical Neuroscience and Behavior</a>.</p>



<p>Among other things, Ann has been studying how processes important in life, like survival, threat response, motivation, and pain, are mediated through subcortical brain areas like the hypothalamus. She also pays attention to the time course those life processes require, which has led her to consider how the expression of things like proteins helps shape neural processes throughout the brain, so we can behave appropriately in those different contexts.</p>



<p>You'll hear us talk about how this is still a pretty open field in theoretical neuroscience, in contrast to the historically heavy use of theory in popular brain areas throughout the cortex and the historically narrow focus on spikes, or action potentials, as the only game in town when it comes to neural computation. We discuss that, and I link in the show notes to a commentary piece Ann wrote in which she argues for both top-down and bottom-up theoretical approaches.</p>



<p>I also link to her papers about the early evolution of nervous systems and about how the heterogeneity, or diversity, of neurons is an advantage for neural computations. We also discuss a Kaggle competition she developed to benchmark automated behavioral labeling of behaving organisms, so that even when different researchers use different recording systems and setups, analyzing those data will produce consistent labels, making it easier to compare across labs and aggregate bigger and better data sets.</p>



<ul class="wp-block-list">
<li><a href="https://www.kennedylab.org/">Laboratory for Theoretical Neuroscience and Behavior</a>.</li>



<li>Social:
<ul class="wp-block-list">
<li><a href="https://bsky.app/profile/antihebbiann.bsky.social">@antihebbiann.bsky.social</a></li>



<li><a href="https://x.com/Antihebbiann">@Antihebbiann</a></li>
</ul>
</li>



<li>The <a href="https://www.kaggle.com/competitions/MABe-mouse-behavior-detection">Kaggle competition Ann developed</a> to generalize behavior categorization.</li>



<li>Related papers<ul><li><a href="https://www.kennedylab.org/_files/ugd/680470_913f9d06566941a1bb03600464888e76.pdf">Dynamics of neural activity in early nervous system evolution</a>.</li><li><a href="https://www.nature.com/articles/s41583-025-00965-8">Theoretical neuroscience has room to grow</a>.</li></ul>
<ul class="wp-block-list">
<li><a href="https://www.pnas.org/doi/epub/10.1073/pnas.2311885121">Neural heterogeneity controls computations in spiking neural networks</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2024.02.26.582069v1.full.pdf">A parabrachial hub for the prioritization of survival behavior</a>.</li>



<li><a href="https://www.cell.com/cell/fulltext/S0092-8674(22)01471-4">An approximate line attractor in the hypothalamus encodes an aggressive state</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/09/BI-221-transcript-kennedy.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:36 - Why study subcortical areas?
13:30 - Evolution
15:06 - Dynamical systems and time scales
21:32 - NeuroAI
28:37 - Before there were brains
33:11 - Endogenous spontaneous activity
40:09 - Natural vs artificial
43:09 - Different is more - heterogeneity
45:32 - Neuromodulators and neuropeptide functions
55:47 - Heterogeneity: manifolds, subspaces, and gain
1:02:43 - Control knobs
1:09:45 - Theoretical neuroscience has room to grow
1:19:59 - Hypothalamus
1:20:57 - Subcortical vs "higher" cognition
1:24:53 - 4E cognition
1:26:56 - Behavior benchmarking
1:37:26 - Current challenges
1:39:46 - Advice to young researchers</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2533/221.mp3" length="100934524" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Ann Kennedy is Associate Professor at Scripps Research Institute and runs the Laboratory for Theoretical Neuroscience and Behavior.



Among other things, Ann has been studying how processes important in life, like survival, threat response, motivation, and pain, are mediated through subcortical brain areas like the hypothalamus. She also pays attention to the time course those life processes require, which has led her to consider how the expression of things like proteins helps shape neural processes throughout the brain, so we can behave appropriately in those different contexts.



You'll hear us talk about how this is still a pretty open field in theoretical neuroscience, in contrast to the historically heavy use of theory in popular brain areas throughout the cortex and the historically narrow focus on spikes, or action potentials, as the only game in town when it comes to neural computation. We discuss that, and I link in the show notes to a commentary piece Ann wrote in which she argues for both top-down and bottom-up theoretical approaches.



I also link to her papers about the early evolution of nervous systems and about how the heterogeneity, or diversity, of neurons is an advantage for neural computations. We also discuss a Kaggle competition she developed to benchmark automated behavioral labeling of behaving organisms, so that even when different researchers use different recording systems and setups, analyzing those data will produce consistent labels, making it easier to compare across labs and aggregate bigger and better data sets.




Laboratory for Theoretical Neuroscience and Behavior.



Social:

@antihebbiann.bsky.social



@Antihebbiann





The Kaggle competition Ann developed to generalize behavior categorization.



Related papers

Dynamics of neural activity in early nervous system evolution.

Theoretical neuroscience has room to grow.

Neural heterogeneity controls computations in spiking neural networks.



A parabrachial hub for the prioritization of survival behavior.



An approximate line attractor in the hypothalamus encodes an aggressive state.






Read the transcript.



0:00 - Intro
3:36 - Why study subcortical areas?
13:30 - Evolution
15:06 - Dynamical systems and time scales
21:32 - NeuroAI
28:37 - Before there were brains
33:11 - Endogenous spontaneous activity
40:09 - Natural vs artificial
43:09 - Different is more - heterogeneity
45:32 - Neuromodulators and neuropeptide functions
55:47 - Heterogeneity: manifolds, subspaces, and gain
1:02:43 - Control knobs
1:09:45 - Theoretical neuroscience has room to grow
1:19:59 - Hypothalamus
1:20:57 - Subcortical vs "higher" cognition
1:24:53 - 4E cognition
1:26:56 - Behavior benchmarking
1:37:26 - Current challenges
1:39:46 - Advice to young researchers]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/09/221-Ann-Kennedy-web-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/09/221-Ann-Kennedy-web-1.jpg</url>
		<title>BI 221 Ann Kennedy: Theory Beneath the Cortical Surface</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:37</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Ann Kennedy is Associate Professor at Scripps Research Institute and runs the Laboratory for Theoretical Neuroscience and Behavior.



Among other things, Ann has been studying how processes important in life, like survival, threat response, motivation, and pain, are mediated through subcortical brain areas like the hypothalamus. She also pays attention to the time course those life processes require, which has led her to consider how the expression of things like proteins helps shape neural processes throughout the brain, so we can behave appropriately in those different contexts.



You'll hear us talk about how this is still a pretty open field in theoretical neuroscience, in contrast to the historically heavy use of theory in popular brain areas throughout the cortex and the historically narrow focus on spikes, or action potentials, as the only game in town when it comes to neural computation. We d]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/09/221-Ann-Kennedy-web-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 220 Michael Breakspear and Mac Shine: Dynamic Systems from Neurons to Brains</title>
	<link>https://braininspired.co/podcast/220/</link>
	<pubDate>Wed, 10 Sep 2025 04:01:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2527</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>: <a href="https://www.thetransmitter.org/partners/">https://www.thetransmitter.org/partners/</a></p>



<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>What changes and what stays the same as you scale from single neurons, up to local populations of neurons, up to whole brains? And how does tuning parameters like the gain of some neural populations affect the dynamical and computational properties of the rest of the system?</p>



<p>Those are the main questions my guests today discuss. Michael Breakspear is a professor of Systems Neuroscience and runs the <a href="https://www.systemsneurosciencegroup.com/team">Systems Neuroscience Group</a> at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the <a href="https://shine-lab.org/">Shine Lab</a> at the University of Sydney in Australia.</p>



<p>Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning the gain up or down across broad networks of neurons in the brain affects integration (working together) and segregation (working apart). They map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain, distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition.</p>



<p>In their recent collaboration, they used a coarse-graining procedure inspired by physics to study the collective dynamics of neural populations of various sizes, from single neurons to large populations. Here they found that despite different coding properties at different scales, there are also scale-free properties, suggesting that neural populations of all sizes, from single neurons to whole brains, can do cognitive work useful for the organism. And they found this property is conserved across many different species, suggesting it's a universal principle of brain dynamics.</p>



<p>So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask.</p>



<ul class="wp-block-list">
<li>Breakspear: <a href="https://www.systemsneurosciencegroup.com/team">Systems Neuroscience Group</a>.
<ul class="wp-block-list">
<li><a href="https://x.com/DrBreaky">@DrBreaky</a>.</li>
</ul>
</li>



<li>Shine: <a href="https://shine-lab.org/">Shine Lab</a>.
<ul class="wp-block-list">
<li><a href="https://x.com/jmacshine">@jmacshine</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.ngds-ku.org/Papers/J61/Breakspear.pdf">Dynamic models of large-scale brain activity</a></li>



<li><a href="https://www.nature.com/articles/s41467-019-08999-0">Metastable brain waves</a></li>



<li><a href="https://elifesciences.org/articles/31130">The modulation of neural gain facilitates a transition between functional segregation and integration in the brain</a></li>



<li><a href="https://shine-lab.org/wp-content/uploads/2024/10/2024_cell.pdf">Multiscale Organization of Neuronal Activity Unifies Scale-Dependent Theories of Brain Function</a>.</li>



<li><a href="https://shine-lab.org/wp-content/uploads/2025/03/2025_curropin.pdf">The brain that controls itself</a>.</li>



<li><a href="https://ccs.fau.edu/hbblab/pdfs/2024_Hancock_Kelso_NRN.pdf">Metastability demystified — the foundational past, the pragmatic present and the promising future</a>.</li>



<li><a href="https://direct.mit.edu/imag/article/doi/10.1162/IMAG.a.71/131445">Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/212/">BI 212 John Beggs: Why Brains Seek the Edge of Chaos</a></li>



<li><a href="https://braininspired.co/podcast/216/">BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality</a></li>



<li><a href="https://braininspired.co/podcast/121/">BI 121 Mac Shine: Systems Neurobiology</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/09/BI-220-transcript-breakspear-shine-1.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>: <a href="https://www.thetransmitter.org/partners/">https://www.thetransmitter.org/partners/</a></p>



<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>What changes and what stays the same as you scale from single neurons, up to local populations of neurons, up to whole brains? And how does tuning parameters like the gain of some neural populations affect the dynamical and computational properties of the rest of the system?</p>



<p>Those are the main questions my guests today discuss. Michael Breakspear is a professor of Systems Neuroscience and runs the <a href="https://www.systemsneurosciencegroup.com/team">Systems Neuroscience Group</a> at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the <a href="https://shine-lab.org/">Shine Lab</a> at the University of Sydney in Australia.</p>



<p>Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning the gain up or down across broad networks of neurons in the brain affects integration (working together) and segregation (working apart). They map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain, distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition.</p>



<p>In their recent collaboration, they used a coarse-graining procedure inspired by physics to study the collective dynamics of neural populations of various sizes, from single neurons to large populations. Here they found that despite different coding properties at different scales, there are also scale-free properties, suggesting that neural populations of all sizes, from single neurons to whole brains, can do cognitive work useful for the organism. And they found this property is conserved across many different species, suggesting it's a universal principle of brain dynamics.</p>



<p>So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask.</p>



<ul class="wp-block-list">
<li>Breakspear: <a href="https://www.systemsneurosciencegroup.com/team">Systems Neuroscience Group</a>.
<ul class="wp-block-list">
<li><a href="https://x.com/DrBreaky">@DrBreaky</a>.</li>
</ul>
</li>



<li>Shine: <a href="https://shine-lab.org/">Shine Lab</a>.
<ul class="wp-block-list">
<li><a href="https://x.com/jmacshine">@jmacshine</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.ngds-ku.org/Papers/J61/Breakspear.pdf">Dynamic models of large-scale brain activity</a></li>



<li><a href="https://www.nature.com/articles/s41467-019-08999-0">Metastable brain waves</a></li>



<li><a href="https://elifesciences.org/articles/31130">The modulation of neural gain facilitates a transition between functional segregation and integration in the brain</a></li>



<li><a href="https://shine-lab.org/wp-content/uploads/2024/10/2024_cell.pdf">Multiscale Organization of Neuronal Activity Unifies Scale-Dependent Theories of Brain Function</a>.</li>



<li><a href="https://shine-lab.org/wp-content/uploads/2025/03/2025_curropin.pdf">The brain that controls itself</a>.</li>



<li><a href="https://ccs.fau.edu/hbblab/pdfs/2024_Hancock_Kelso_NRN.pdf">Metastability demystified — the foundational past, the pragmatic present and the promising future</a>.</li>



<li><a href="https://direct.mit.edu/imag/article/doi/10.1162/IMAG.a.71/131445">Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/212/">BI 212 John Beggs: Why Brains Seek the Edge of Chaos</a></li>



<li><a href="https://braininspired.co/podcast/216/">BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality</a></li>



<li><a href="https://braininspired.co/podcast/121/">BI 121 Mac Shine: Systems Neurobiology</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/09/BI-220-transcript-breakspear-shine-1.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2527/220.mp3" length="82402417" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership: https://www.thetransmitter.org/partners/



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



What changes and what stays the same as you scale from single neurons, up to local populations of neurons, up to whole brains? And how does tuning parameters like the gain of some neural populations affect the dynamical and computational properties of the rest of the system?



Those are the main questions my guests today discuss. Michael Breakspear is a professor of Systems Neuroscience and runs the Systems Neuroscience Group at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the Shine Lab at the University of Sydney in Australia.



Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning the gain up or down across broad networks of neurons in the brain affects integration (working together) and segregation (working apart). They map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain, distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition.



In their recent collaboration, they used a coarse-graining procedure inspired by physics to study the collective dynamics of neural populations of various sizes, from single neurons to large populations. Here they found that despite different coding properties at different scales, there are also scale-free properties, suggesting that neural populations of all sizes, from single neurons to whole brains, can do cognitive work useful for the organism. And they found this property is conserved across many different species, suggesting it's a universal principle of brain dynamics.



So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask.




Breakspear: Systems Neuroscience Group.

@DrBreaky.





Shine: Shine Lab.

@jmacshine.





Related papers

Dynamic models of large-scale brain activity



Metastable brain waves



The modulation of neural gain facilitates a transition between functional segregation and integration in the brain



Multiscale Organization of Neuronal Activity Unifies Scale-Dependent Theories of Brain Function.



The brain that controls itself.



Metastability demystified — the foundational past, the pragmatic present and the promising future.



Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes.





Related episodes

BI 212 John Beggs: Why Brains Seek the Edge of Chaos



BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality



BI 121 Mac Shine: Systems Neurobiology






Read the transcript.



0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/09/thumb-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/09/thumb-web.jpg</url>
		<title>BI 220 Michael Breakspear and Mac Shine: Dynamic Systems from Neurons to Brains</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:05</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership: https://www.thetransmitter.org/partners/



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



What changes and what stays the same as you scale from single neurons, up to local populations of neurons, up to whole brains? And how does tuning parameters like the gain of some neural populations affect the dynamical and computational properties of the rest of the system?



Those are the main questions m]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/09/thumb-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 219 Xaq Pitkow: Principles and Constraints of Cognition</title>
	<link>https://braininspired.co/podcast/219/</link>
	<pubDate>Wed, 27 Aug 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2520</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Xaq Pitkow runs the <a href="https://xaqlab.com/">Lab for the Algorithmic Brain</a> at Carnegie Mellon University. The main theme of our discussion is how Xaq approaches his research into cognition by way of principles, from which his questions, models, and methods spring forth. We discuss those principles, and in that light, some of his specific lines of work and ideas on the theoretical side of trying to understand and explain a slew of cognitive processes. A few of the specifics we discuss are:</p>



<ul class="wp-block-list">
<li>How, when we present tasks for organisms to solve, they use strategies that are suboptimal relative to the task but nearly optimal relative to their beliefs about what they need to do, which Xaq calls inverse rational control.</li>



<li>Probabilistic graph networks.</li>



<li>How brains use probabilities to compute.</li>



<li>A new ecological neuroscience project Xaq has started with multiple collaborators.</li>
</ul>



<ul class="wp-block-list">
<li><a href="https://xaqlab.com/">LAB: Lab for the Algorithmic Brain</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2409.02709">How does the brain compute with probabilities?</a></li>



<li><a href="https://www.pnas.org/doi/pdf/10.1073/pnas.1912336117">Rational thoughts in neural codes.</a></li>



<li><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11581108/">Control when confidence is costly</a></li>



<li><a href="https://link.springer.com/article/10.1007/s41468-023-00147-4">Generalization of graph network inferences in higher-order graphical models</a>.</li>



<li><a href="https://arxiv.org/pdf/2501.07440">Attention when you need</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/08/BI-219-transcript-pitkow.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:57 - Xaq's approach
8:28 - Inverse rational control
19:19 - Space of input-output functions
24:48 - Cognition for cognition
27:35 - Theory vs. experiment
40:32 - How does the brain compute with probabilities?
1:03:57 - Normative vs kludge
1:07:44 - Ecological neuroscience
1:20:47 - Representations
1:29:34 - Current projects
1:36:04 - Need a synaptome
1:42:20 - Across scales</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Xaq Pitkow runs the <a href="https://xaqlab.com/">Lab for the Algorithmic Brain</a> at Carnegie Mellon University. The main theme of our discussion is how Xaq approaches his research into cognition by way of principles, from which his questions, models, and methods spring forth. We discuss those principles, and in that light, some of his specific lines of work and ideas on the theoretical side of trying to understand and explain a slew of cognitive processes. A few of the specifics we discuss are:</p>



<ul class="wp-block-list">
<li>How, when we present tasks for organisms to solve, they use strategies that are suboptimal relative to the task but nearly optimal relative to their beliefs about what they need to do, which Xaq calls inverse rational control.</li>



<li>Probabilistic graph networks.</li>



<li>How brains use probabilities to compute.</li>



<li>A new ecological neuroscience project Xaq has started with multiple collaborators.</li>
</ul>



<ul class="wp-block-list">
<li><a href="https://xaqlab.com/">LAB: Lab for the Algorithmic Brain</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2409.02709">How does the brain compute with probabilities?</a></li>



<li><a href="https://www.pnas.org/doi/pdf/10.1073/pnas.1912336117">Rational thoughts in neural codes.</a></li>



<li><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11581108/">Control when confidence is costly</a></li>



<li><a href="https://link.springer.com/article/10.1007/s41468-023-00147-4">Generalization of graph network inferences in higher-order graphical models</a>.</li>



<li><a href="https://arxiv.org/pdf/2501.07440">Attention when you need</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/08/BI-219-transcript-pitkow.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:57 - Xaq's approach
8:28 - Inverse rational control
19:19 - Space of input-output functions
24:48 - Cognition for cognition
27:35 - Theory vs. experiment
40:32 - How does the brain compute with probabilities?
1:03:57 - Normative vs kludge
1:07:44 - Ecological neuroscience
1:20:47 - Representations
1:29:34 - Current projects
1:36:04 - Need a synaptome
1:42:20 - Across scales</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2520/219.mp3" length="104065344" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Xaq Pitkow runs the Lab for the Algorithmic Brain at Carnegie Mellon University. The main theme of our discussion is how Xaq approaches his research into cognition by way of principles, from which his questions, models, and methods spring forth. We discuss those principles, and in that light, some of his specific lines of work and ideas on the theoretical side of trying to understand and explain a slew of cognitive processes. A few of the specifics we discuss are:




How, when we present tasks for organisms to solve, they use strategies that are suboptimal relative to the task but nearly optimal relative to their beliefs about what they need to do, which Xaq calls inverse rational control.



Probabilistic graph networks.



How brains use probabilities to compute.



A new ecological neuroscience project Xaq has started with multiple collaborators.





LAB: Lab for the Algorithmic Brain.



Related papers

How does the brain compute with probabilities?



Rational thoughts in neural codes.



Control when confidence is costly



Generalization of graph network inferences in higher-order graphical models.



Attention when you need.






Read the transcript.



0:00 - Intro
3:57 - Xaq's approach
8:28 - Inverse rational control
19:19 - Space of input-output functions
24:48 - Cognition for cognition
27:35 - Theory vs. experiment
40:32 - How does the brain compute with probabilities?
1:03:57 - Normative vs kludge
1:07:44 - Ecological neuroscience
1:20:47 - Representations
1:29:34 - Current projects
1:36:04 - Need a synaptome
1:42:20 - Across scales]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/08/web-thumb-2.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/08/web-thumb-2.jpg</url>
		<title>BI 219 Xaq Pitkow: Principles and Constraints of Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:47:11</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Xaq Pitkow runs the Lab for the Algorithmic Brain at Carnegie Mellon University. The main theme of our discussion is how Xaq approaches his research into cognition by way of principles, from which his questions, models, and methods spring forth. We discuss those principles, and in that light, some of his specific lines of work and ideas on the theoretical side of trying to unders]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/08/web-thumb-2.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 218 Chris Rozell: Brain Stimulation and AI for Mental Disorders</title>
	<link>https://braininspired.co/podcast/218/</link>
	<pubDate>Wed, 13 Aug 2025 10:44:13 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2514</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>We are in an exciting time in the cross-fertilization of the neurotech industry and the cognitive sciences. My guest today is Chris Rozell, who sits in that space that connects neurotech and brain research. Chris runs the <a href="https://siplab.gatech.edu/index.html">Structured Information for Precision Neuroengineering Lab</a> at Georgia Tech, and he was just named the inaugural director of Georgia Tech’s <a href="https://neuro.gatech.edu/">Institute for Neuroscience, Neurotechnology, and Society</a>. I believe this is the first time on Brain Inspired we've discussed stimulating brains to treat mental disorders. Today we talk about Chris's work establishing a biomarker from brain recordings of patients with treatment-resistant depression, a specific form of depression. These patients have deep brain stimulation electrodes implanted in an effort to treat their depression. Chris and his team used that stimulation, in conjunction with brain recordings and machine learning tools, to predict how effective the treatment will be, under what circumstances, and so on, to help psychiatrists better treat their patients. We'll get into the details and surrounding issues. Toward the end, we also talk about Chris's unique background, path, and approach, and why he thinks interdisciplinary research is so important. He's one of the most genuinely well-intentioned people I've met, and I hope you're inspired by his research and his story.</p>



<ul class="wp-block-list">
<li><a href="https://siplab.gatech.edu/index.html">Structured Information for Precision Neuroengineering Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/crozSciTech">@crozSciTech</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://go.nature.com/48lmlzC">Cingulate dynamics track depression recovery with deep brain stimulation</a>.</li>
</ul>
</li>



<li><a href="https://www.storycollider.org/stories/2025/7/11/wired-lives-stories-about-brain-computer-interfaces">Story Collider: Wired Lives</a></li>
</ul>



<p>0:00 - Intro
3:20 - Overview of the study
17:11 - Closed and open loop stimulation
19:34 - Predicting recovery
28:45 - Control knob for treatment
39:04 - Historical and modern brain stimulation
49:07 - Treatment resistant depression
53:44 - Control nodes complex systems
1:01:06 - Explainable generative AI for a biomarker
1:16:40 - Where are we and what are the obstacles?
1:21:32 - Interface Neuro
1:24:55 - Why Chris cares</p>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/08/BI-218-transcript-production.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









We are in an exciting time in the cross-fertilization of the neurotech industry and the cognitive sciences. My guest today is Chris Rozell, who sits in that spa]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>We are in an exciting time in the cross-fertilization of the neurotech industry and the cognitive sciences. My guest today is Chris Rozell, who sits in that space that connects neurotech and brain research. Chris runs the <a href="https://siplab.gatech.edu/index.html">Structured Information for Precision Neuroengineering Lab</a> at Georgia Tech, and he was just named the inaugural director of Georgia Tech’s <a href="https://neuro.gatech.edu/">Institute for Neuroscience, Neurotechnology, and Society</a>. I believe this is the first time on Brain Inspired we've discussed stimulating brains to treat mental disorders. Today we talk about Chris's work establishing a biomarker from brain recordings of patients with treatment-resistant depression, a specific form of depression. These patients have deep brain stimulation electrodes implanted in an effort to treat their depression. Chris and his team used that stimulation, in conjunction with brain recordings and machine learning tools, to predict how effective the treatment will be, under what circumstances, and so on, to help psychiatrists better treat their patients. We'll get into the details and surrounding issues. Toward the end, we also talk about Chris's unique background, path, and approach, and why he thinks interdisciplinary research is so important. He's one of the most genuinely well-intentioned people I've met, and I hope you're inspired by his research and his story.</p>



<ul class="wp-block-list">
<li><a href="https://siplab.gatech.edu/index.html">Structured Information for Precision Neuroengineering Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/crozSciTech">@crozSciTech</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://go.nature.com/48lmlzC">Cingulate dynamics track depression recovery with deep brain stimulation</a>.</li>
</ul>
</li>



<li><a href="https://www.storycollider.org/stories/2025/7/11/wired-lives-stories-about-brain-computer-interfaces">Story Collider: Wired Lives</a></li>
</ul>



<p>0:00 - Intro
3:20 - Overview of the study
17:11 - Closed and open loop stimulation
19:34 - Predicting recovery
28:45 - Control knob for treatment
39:04 - Historical and modern brain stimulation
49:07 - Treatment resistant depression
53:44 - Control nodes complex systems
1:01:06 - Explainable generative AI for a biomarker
1:16:40 - Where are we and what are the obstacles?
1:21:32 - Interface Neuro
1:24:55 - Why Chris cares</p>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/08/BI-218-transcript-production.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2514/218.mp3" length="103553466" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









We are in an exciting time in the cross-fertilization of the neurotech industry and the cognitive sciences. My guest today is Chris Rozell, who sits in that space that connects neurotech and brain research. Chris runs the Structured Information for Precision Neuroengineering Lab at Georgia Tech, and he was just named the inaugural director of Georgia Tech’s Institute for Neuroscience, Neurotechnology, and Society. I believe this is the first time on Brain Inspired we've discussed stimulating brains to treat mental disorders. Today we talk about Chris's work establishing a biomarker from brain recordings of patients with treatment-resistant depression, a specific form of depression. These patients have deep brain stimulation electrodes implanted in an effort to treat their depression. Chris and his team used that stimulation, in conjunction with brain recordings and machine learning tools, to predict how effective the treatment will be, under what circumstances, and so on, to help psychiatrists better treat their patients. We'll get into the details and surrounding issues. Toward the end, we also talk about Chris's unique background, path, and approach, and why he thinks interdisciplinary research is so important. He's one of the most genuinely well-intentioned people I've met, and I hope you're inspired by his research and his story.




Structured Information for Precision Neuroengineering Lab.



Twitter:&nbsp;@crozSciTech.



Related papers

Cingulate dynamics track depression recovery with deep brain stimulation.





Story Collider: Wired Lives




0:00 - Intro
3:20 - Overview of the study
17:11 - Closed and open loop stimulation
19:34 - Predicting recovery
28:45 - Control knob for treatment
39:04 - Historical and modern brain stimulation
49:07 - Treatment resistant depression
53:44 - Control nodes complex systems
1:01:06 - Explainable generative AI for a biomarker
1:16:40 - Where are we and what are the obstacles?
1:21:32 - Interface Neuro
1:24:55 - Why Chris cares



Read the transcript.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/08/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/08/website-thumb.jpg</url>
		<title>BI 218 Chris Rozell: Brain Stimulation and AI for Mental Disorders</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:46:39</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









We are in an exciting time in the cross-fertilization of the neurotech industry and the cognitive sciences. My guest today is Chris Rozell, who sits in that space that connects neurotech and brain research. Chris runs the Structured Information for Precision Neuroengineering Lab at Georgia Tech, and he was just named the inaugural director of Georgia Tech’s Institute for Neuroscience, Neurotechnology, and Society. I believe this is the first time on Brain Inspired we've discussed stimulating brains to treat mental disorders. Today we talk about Chris's work establishing a biomarker from brain recordings of patients with treatment-resistant depression, a specific form of depression. These patients have deep brain stimulation electrodes implanted in an effort to treat their depression. Chris and his team used that stimulation, in conjunction with brain recordings and ma]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/08/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 217 Jennifer Prendki: Consciousness, Life, AI, and Quantum Physics</title>
	<link>https://braininspired.co/podcast/217/</link>
	<pubDate>Wed, 30 Jul 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2511</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Do AI engineers need to emulate some processes and features currently found only in living organisms, like how brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, or a key part, or <em>the</em> key part? Jennifer Prendki believes that if we continue to scale AI, we will get more of the same as what we have today, and that we should look to biology, life, and possibly consciousness to enhance AI. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data to train AI; in that vein, she led those efforts at DeepMind on the foundation models now ubiquitous in our lives.</p>



<p>I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, whatever that is. Her perspective is a rarity among her peers, which we also discuss. And get this: she's interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps that, too, is a rarity among those charging ahead to dominate profits and win the race.</p>



<ul class="wp-block-list">
<li>Jennifer's website: <a href="https://www.quantumofdata.com/">Quantum of Data</a>.</li>



<li>The blog posts we discuss:
<ul class="wp-block-list">
<li><a href="https://www.quantumofdata.com/blog-posts/the-myth-of-emergence">The Myth of Emergence</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/embodiment-sentience-why-the-body-still-matters">Embodiment &amp; Sentience: Why the Body still Matters</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/the-architecture-of-synthetic-consciousness">The Architecture of Synthetic Consciousness</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/on-time-and-consciousness">On Time and Consciousness</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/superalignment-and-the-question-of-ai-personhood">Superalignment and the Question of AI Personhood</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/07/FINAL_BI-217-transcript-prendki.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:25 - Jennifer's background
13:10 - Consciousness
16:38 - Life and consciousness
23:16 - Superalignment
40:11 - Quantum
1:04:45 - Wetware and biological mimicry
1:15:03 - Neural interfaces
1:16:48 - AI ethics
1:2:35 - AI models are not models
1:27:13 - What scaling will get us
1:39:53 - Current roadblocks
1:43:19 - Philosophy</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Do AI engineers need to emulate some processes and features currently found only in living organisms, like how brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, or a key part, or <em>the</em> key part? Jennifer Prendki believes that if we continue to scale AI, we will get more of the same as what we have today, and that we should look to biology, life, and possibly consciousness to enhance AI. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data to train AI; in that vein, she led those efforts at DeepMind on the foundation models now ubiquitous in our lives.</p>



<p>I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, whatever that is. Her perspective is a rarity among her peers, which we also discuss. And get this: she's interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps that, too, is a rarity among those charging ahead to dominate profits and win the race.</p>



<ul class="wp-block-list">
<li>Jennifer's website: <a href="https://www.quantumofdata.com/">Quantum of Data</a>.</li>



<li>The blog posts we discuss:
<ul class="wp-block-list">
<li><a href="https://www.quantumofdata.com/blog-posts/the-myth-of-emergence">The Myth of Emergence</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/embodiment-sentience-why-the-body-still-matters">Embodiment &amp; Sentience: Why the Body still Matters</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/the-architecture-of-synthetic-consciousness">The Architecture of Synthetic Consciousness</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/on-time-and-consciousness">On Time and Consciousness</a></li>



<li><a href="https://www.quantumofdata.com/blog-posts/superalignment-and-the-question-of-ai-personhood">Superalignment and the Question of AI Personhood</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/07/FINAL_BI-217-transcript-prendki.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:25 - Jennifer's background
13:10 - Consciousness
16:38 - Life and consciousness
23:16 - Superalignment
40:11 - Quantum
1:04:45 - Wetware and biological mimicry
1:15:03 - Neural interfaces
1:16:48 - AI ethics
1:2:35 - AI models are not models
1:27:13 - What scaling will get us
1:39:53 - Current roadblocks
1:43:19 - Philosophy</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2511/217.mp3" length="105719261" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Do AI engineers need to emulate some processes and features currently found only in living organisms, like how brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, or a key part, or the key part? Jennifer Prendki believes that if we continue to scale AI, we will get more of the same as what we have today, and that we should look to biology, life, and possibly consciousness to enhance AI. Jennifer is a former particle physicist turned entrepreneur and AI expert, focusing on curating the right kinds and forms of data to train AI; in that vein, she led those efforts at DeepMind on the foundation models now ubiquitous in our lives.



I was curious why someone with that background would come to the conclusion that AI needs inspiration from life, biology, and consciousness to move forward gracefully, and that it would be useful to better understand those processes in ourselves before trying to build what some people call AGI, whatever that is. Her perspective is a rarity among her peers, which we also discuss. And get this: she's interested in these topics because she cares about what happens to the planet and to us as a species. Perhaps that, too, is a rarity among those charging ahead to dominate profits and win the race.




Jennifer's website: Quantum of Data.



The blog posts we discuss:

The Myth of Emergence



Embodiment &amp; Sentience: Why the Body still Matters



The Architecture of Synthetic Consciousness



On Time and Consciousness



Superalignment and the Question of AI Personhood.






Read the transcript.



0:00 - Intro
3:25 - Jennifer's background
13:10 - Consciousness
16:38 - Life and consciousness
23:16 - Superalignment
40:11 - Quantum
1:04:45 - Wetware and biological mimicry
1:15:03 - Neural interfaces
1:16:48 - AI ethics
1:2:35 - AI models are not models
1:27:13 - What scaling will get us
1:39:53 - Current roadblocks
1:43:19 - Philosophy]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/07/websitethumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/07/websitethumb.jpg</url>
		<title>BI 217 Jennifer Prendki: Consciousness, Life, AI, and Quantum Physics</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:48:53</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Do AI engineers need to emulate processes and features currently found only in living organisms, such as the way brains are inextricably integrated with bodies? Is consciousness necessary for AI entities if we want them to play nice with us? Is quantum physics part of that story, a key part, or the key part? Jennifer Prendki believes that if we continue to scale AI, we will get more of ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/07/websitethumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality</title>
	<link>https://braininspired.co/podcast/216/</link>
	<pubDate>Wed, 16 Jul 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2507</guid>
	<description><![CDATA[



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>A few episodes ago, episode 212, I <a href="https://braininspired.co/podcast/212/">conversed with John Beggs</a> about how criticality might be an important dynamic regime of brain function to optimize our cognition and behavior. Today we continue and extend that exploration with a few other folks in the criticality world.</p>



<p>Woodrow Shew is a professor who runs the <a href="https://www.woodrowshew.com/home">Shew Lab</a> at the University of Arkansas. Keith Hengen is an associate professor who runs the <a href="https://hengenlab.org/">Hengen Lab</a> at Washington University in St. Louis, Missouri. Together they are the Hengen and Shew of a recent review paper in Neuron, titled <a href="https://www.cell.com/neuron/fulltext/S0896-6273(25)00391-5">Is criticality a unified setpoint of brain function?</a> In the review they argue that criticality is a kind of homeostatic goal of neural activity. They describe multiple properties and signatures of criticality, discuss several testable predictions of their thesis, and address the historical and current controversies surrounding criticality in the brain, surveying what Woody believes is every past study of criticality, more than 300 in all. They also offer an account of why many of those studies did not find criticality but, viewed through a modern lens, most likely would have. We discuss some of the topics in their paper, but we also dance around their current thoughts about things like the nature and implications of being nearer to or farther from critical dynamics, the relation between criticality and neural manifolds, and a lot more. You get to experience Woody and Keith thinking in real time about these things, which I hope you appreciate.</p>



<ul class="wp-block-list">
<li><a href="https://www.woodrowshew.com/home">Shew Lab</a>.  <a href="https://x.com/ShewLab">@ShewLab</a></li>



<li><a href="https://hengenlab.org/">Hengen Lab</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(25)00391-5">Is criticality a unified setpoint of brain function?</a></li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/07/BI-216-transcript-hengen-shew-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:41 - Collaborating
6:22 - Criticality community
14:47 - Tasks vs. Naturalistic
20:50 - Nature of criticality
25:47 - Deviating from criticality
33:45 - Sleep for criticality
38:41 - Neuromodulation for criticality
40:45 - Criticality Definition part 1: scale invariance
43:14 - Criticality Definition part 2: At a boundary
51:56 - New method to assess criticality
56:12 - Types of criticality
1:02:23 - Value of criticality versus other metrics
1:15:21 - Manifolds and criticality
1:26:06 - Current challenges</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Vi]]></itunes:subtitle>
	<content:encoded><![CDATA[



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>A few episodes ago, episode 212, I <a href="https://braininspired.co/podcast/212/">conversed with John Beggs</a> about how criticality might be an important dynamic regime of brain function to optimize our cognition and behavior. Today we continue and extend that exploration with a few other folks in the criticality world.</p>



<p>Woodrow Shew is a professor who runs the <a href="https://www.woodrowshew.com/home">Shew Lab</a> at the University of Arkansas. Keith Hengen is an associate professor who runs the <a href="https://hengenlab.org/">Hengen Lab</a> at Washington University in St. Louis, Missouri. Together they are the Hengen and Shew of a recent review paper in Neuron, titled <a href="https://www.cell.com/neuron/fulltext/S0896-6273(25)00391-5">Is criticality a unified setpoint of brain function?</a> In the review they argue that criticality is a kind of homeostatic goal of neural activity. They describe multiple properties and signatures of criticality, discuss several testable predictions of their thesis, and address the historical and current controversies surrounding criticality in the brain, surveying what Woody believes is every past study of criticality, more than 300 in all. They also offer an account of why many of those studies did not find criticality but, viewed through a modern lens, most likely would have. We discuss some of the topics in their paper, but we also dance around their current thoughts about things like the nature and implications of being nearer to or farther from critical dynamics, the relation between criticality and neural manifolds, and a lot more. You get to experience Woody and Keith thinking in real time about these things, which I hope you appreciate.</p>



<ul class="wp-block-list">
<li><a href="https://www.woodrowshew.com/home">Shew Lab</a>.  <a href="https://x.com/ShewLab">@ShewLab</a></li>



<li><a href="https://hengenlab.org/">Hengen Lab</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(25)00391-5">Is criticality a unified setpoint of brain function?</a></li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/07/BI-216-transcript-hengen-shew-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:41 - Collaborating
6:22 - Criticality community
14:47 - Tasks vs. Naturalistic
20:50 - Nature of criticality
25:47 - Deviating from criticality
33:45 - Sleep for criticality
38:41 - Neuromodulation for criticality
40:45 - Criticality Definition part 1: scale invariance
43:14 - Criticality Definition part 2: At a boundary
51:56 - New method to assess criticality
56:12 - Types of criticality
1:02:23 - Value of criticality versus other metrics
1:15:21 - Manifolds and criticality
1:26:06 - Current challenges</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2507/216.mp3" length="91920669" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



A few episodes ago, episode 212, I conversed with John Beggs about how criticality might be an important dynamic regime of brain function to optimize our cognition and behavior. Today we continue and extend that exploration with a few other folks in the criticality world.



Woodrow Shew is a professor who runs the Shew Lab at the University of Arkansas. Keith Hengen is an associate professor who runs the Hengen Lab at Washington University in St. Louis, Missouri. Together they are the Hengen and Shew of a recent review paper in Neuron, titled Is criticality a unified setpoint of brain function? In the review they argue that criticality is a kind of homeostatic goal of neural activity. They describe multiple properties and signatures of criticality, discuss several testable predictions of their thesis, and address the historical and current controversies surrounding criticality in the brain, surveying what Woody believes is every past study of criticality, more than 300 in all. They also offer an account of why many of those studies did not find criticality but, viewed through a modern lens, most likely would have. We discuss some of the topics in their paper, but we also dance around their current thoughts about things like the nature and implications of being nearer to or farther from critical dynamics, the relation between criticality and neural manifolds, and a lot more. You get to experience Woody and Keith thinking in real time about these things, which I hope you appreciate.




Shew Lab.  @ShewLab



Hengen Lab.



Is criticality a unified setpoint of brain function?




Read the transcript.



0:00 - Intro
3:41 - Collaborating
6:22 - Criticality community
14:47 - Tasks vs. Naturalistic
20:50 - Nature of criticality
25:47 - Deviating from criticality
33:45 - Sleep for criticality
38:41 - Neuromodulation for criticality
40:45 - Criticality Definition part 1: scale invariance
43:14 - Criticality Definition part 2: At a boundary
51:56 - New method to assess criticality
56:12 - Types of criticality
1:02:23 - Value of criticality versus other metrics
1:15:21 - Manifolds and criticality
1:26:06 - Current challenges]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/07/216-Keith-Woody-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/07/216-Keith-Woody-website.jpg</url>
		<title>BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:34:21</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



A few episodes ago, episode 212, I conversed with John Beggs about how criticality might be an important dynamic regime of brain function to optimize our cognition and behavior. Today we continue and extend that exploration with a few other folks in the criticality world.



Woodrow Shew is a professor who runs the Shew Lab at the University of Arkansas. Keith Hengen is an associate professo]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/07/216-Keith-Woody-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 215 Xiao-Jing Wang: Theoretical Neuroscience Comes of Age</title>
	<link>https://braininspired.co/podcast/215/</link>
	<pubDate>Wed, 02 Jul 2025 14:24:46 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2504</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU.</p>



<p>Xiao-Jing was born and grew up in China and spent eight years in Belgium studying theoretical physics, including nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc and in one day switched from French to English, from European to American&nbsp;culture, and from physics to neuroscience. I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, who paved the way for the rest of us to study the brain activity underlying cognitive functions like working memory and decision-making.</p>



<p>He has just released his new textbook, <a href="https://amzn.to/4emD7lh">Theoretical Neuroscience: Understanding Cognition</a>, which covers the history and current research on modeling cognitive functions from the very simple to the very cognitive. The book is also somewhat philosophical, arguing that we need to update our approach to explaining how brains function, to go beyond Marr's levels and enter a cross-level mechanistic explanatory pursuit, which we discuss. I just learned he even cites my own PhD research, studying metacognition in nonhuman primates - so you know it's a great book. Learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast, and I hope you enjoy our discussion.</p>



<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/wanglab/">Computational Laboratory of Cortical Dynamics</a></li>



<li>Book: <a href="https://amzn.to/4emD7lh">Theoretical Neuroscience: Understanding Cognition</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang04.pnas.pdf">Division of labor among distinct subtypes of inhibitory neurons in a cortical microcircuit of working memory</a>.</li>



<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang.nrns2020.pdf">Macroscopic gradients of synaptic excitation and inhibition across the neocortex</a>.</li>



<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang.arns2022.pdf">Theory of the multiregional neocortex: large-scale neural dynamics and distributed cognition</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:08 - Why the book now?
11:00 - Modularity in neuro vs AI
14:01 - Working memory and modularity
22:37 - Canonical cortical microcircuits
25:53 - Gradient of inhibitory neurons
27:47 - Comp neuro then and now
45:35 - Cross-level mechanistic understanding
1:13:38 - Bifurcation
1:24:51 - Bifurcation and degeneracy
1:34:02 - Control theory
1:35:41 - Psychiatric disorders
1:39:14 - Beyond dynamical systems
1:43:447 - Mouse as a model
1:48:11 - AI needs a PFC</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU.</p>



<p>Xiao-Jing was born and grew up in China and spent eight years in Belgium studying theoretical physics, including nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc and in one day switched from French to English, from European to American&nbsp;culture, and from physics to neuroscience. I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, who paved the way for the rest of us to study the brain activity underlying cognitive functions like working memory and decision-making.</p>



<p>He has just released his new textbook, <a href="https://amzn.to/4emD7lh">Theoretical Neuroscience: Understanding Cognition</a>, which covers the history and current research on modeling cognitive functions from the very simple to the very cognitive. The book is also somewhat philosophical, arguing that we need to update our approach to explaining how brains function, to go beyond Marr's levels and enter a cross-level mechanistic explanatory pursuit, which we discuss. I just learned he even cites my own PhD research, studying metacognition in nonhuman primates - so you know it's a great book. Learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast, and I hope you enjoy our discussion.</p>



<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/wanglab/">Computational Laboratory of Cortical Dynamics</a></li>



<li>Book: <a href="https://amzn.to/4emD7lh">Theoretical Neuroscience: Understanding Cognition</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang04.pnas.pdf">Division of labor among distinct subtypes of inhibitory neurons in a cortical microcircuit of working memory</a>.</li>



<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang.nrns2020.pdf">Macroscopic gradients of synaptic excitation and inhibition across the neocortex</a>.</li>



<li><a href="https://www.cns.nyu.edu/wanglab/publications/pdf/wang.arns2022.pdf">Theory of the multiregional neocortex: large-scale neural dynamics and distributed cognition</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:08 - Why the book now?
11:00 - Modularity in neuro vs AI
14:01 - Working memory and modularity
22:37 - Canonical cortical microcircuits
25:53 - Gradient of inhibitory neurons
27:47 - Comp neuro then and now
45:35 - Cross-level mechanistic understanding
1:13:38 - Bifurcation
1:24:51 - Bifurcation and degeneracy
1:34:02 - Control theory
1:35:41 - Psychiatric disorders
1:39:14 - Beyond dynamical systems
1:43:447 - Mouse as a model
1:48:11 - AI needs a PFC</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2504/215.mp3" length="108836120" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU.



Xiao-Jing was born and grew up in China and spent eight years in Belgium studying theoretical physics, including nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc and in one day switched from French to English, from European to American culture, and from physics to neuroscience. I know Xiao-Jing as a legend in non-human primate neurophysiology and modeling, who paved the way for the rest of us to study the brain activity underlying cognitive functions like working memory and decision-making.



He has just released his new textbook, Theoretical Neuroscience: Understanding Cognition, which covers the history and current research on modeling cognitive functions from the very simple to the very cognitive. The book is also somewhat philosophical, arguing that we need to update our approach to explaining how brains function, to go beyond Marr's levels and enter a cross-level mechanistic explanatory pursuit, which we discuss. I just learned he even cites my own PhD research, studying metacognition in nonhuman primates - so you know it's a great book. Learn more about Xiao-Jing and the book in the show notes. It was fun having one of my heroes on the podcast, and I hope you enjoy our discussion.




Computational Laboratory of Cortical Dynamics



Book: Theoretical Neuroscience: Understanding Cognition.



Related papers

Division of labor among distinct subtypes of inhibitory neurons in a cortical microcircuit of working memory.



Macroscopic gradients of synaptic excitation and inhibition across the neocortex.



Theory of the multiregional neocortex: large-scale neural dynamics and distributed cognition.






0:00 - Intro
3:08 - Why the book now?
11:00 - Modularity in neuro vs AI
14:01 - Working memory and modularity
22:37 - Canonical cortical microcircuits
25:53 - Gradient of inhibitory neurons
27:47 - Comp neuro then and now
45:35 - Cross-level mechanistic understanding
1:13:38 - Bifurcation
1:24:51 - Bifurcation and degeneracy
1:34:02 - Control theory
1:35:41 - Psychiatric disorders
1:39:14 - Beyond dynamical systems
1:43:447 - Mouse as a model
1:48:11 - AI needs a PFC]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/07/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/07/website-thumb.jpg</url>
		<title>BI 215 Xiao-Jing Wang: Theoretical Neuroscience Comes of Age</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:52:02</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Xiao-Jing Wang is a Distinguished Global Professor of Neuroscience at NYU.



Xiao-Jing was born and grew up in China and spent eight years in Belgium studying theoretical physics, including nonlinear dynamical systems and deterministic chaos. As he tells it, he arrived from Brussels to California as a postdoc and in one day switched from French to English, from European to American culture, an]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/07/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 214 Nicole Rust: How To Actually Fix Brains and Minds</title>
	<link>https://braininspired.co/podcast/214/</link>
	<pubDate>Wed, 18 Jun 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2501</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story:</p>



<p><a href="https://www.thetransmitter.org/the-big-picture/what-if-anything-makes-mood-fundamentally-different-from-memory/">What, if anything, makes mood fundamentally different from memory?</a></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p><a href="https://amzn.to/3H0UmvH">Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That</a>. Nicole Rust runs the Visual Memory Laboratory at the University of Pennsylvania. Her interests have now expanded to include mood and feelings, as you'll hear. And she wrote this book, which contains a plethora of ideas about how we can pave a way forward in neuroscience to help treat mental and brain disorders. We talk about a small plethora of those ideas from her book, which also contains part of the story of her own journey in thinking about these things, from her early work in visual neuroscience to where she is now.</p>



<ul class="wp-block-list">
<li><a href="https://www.nicolecrust.com/">Nicole's website</a>.</li>



<li><a href="https://amzn.to/3H0UmvH">Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That</a>.</li>
</ul>



<p>0:00 - Intro
6:12 - Nicole's path
19:25 - The grand plan
25:18 - Robustness and fragility
39:15 - Mood
49:25 - Model everything!
56:26 - Epistemic iteration
1:06:50 - Can we standardize mood?
1:10:36 - Perspective neuroscience
1:20:12 - William Wimsatt
1:25:40 - Consciousness</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story:</p>



<p><a href="https://www.thetransmitter.org/the-big-picture/what-if-anything-makes-mood-fundamentally-different-from-memory/">What, if anything, makes mood fundamentally different from memory?</a></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p><a href="https://amzn.to/3H0UmvH">Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That</a>. Nicole Rust runs the Visual Memory Laboratory at the University of Pennsylvania. Her interests have now expanded to include mood and feelings, as you'll hear. And she wrote this book, which contains a plethora of ideas about how we can pave a way forward in neuroscience to help treat mental and brain disorders. We talk about a small plethora of those ideas from her book, which also contains part of the story of her own journey in thinking about these things, from her early work in visual neuroscience to where she is now.</p>



<ul class="wp-block-list">
<li><a href="https://www.nicolecrust.com/">Nicole's website</a>.</li>



<li><a href="https://amzn.to/3H0UmvH">Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That</a>.</li>
</ul>



<p>0:00 - Intro
6:12 - Nicole's path
19:25 - The grand plan
25:18 - Robustness and fragility
39:15 - Mood
49:25 - Model everything!
56:26 - Epistemic iteration
1:06:50 - Can we standardize mood?
1:10:36 - Perspective neuroscience
1:20:12 - William Wimsatt
1:25:40 - Consciousness</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2501/214.mp3" length="90784826" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this story:



What, if anything, makes mood fundamentally different from memory?



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That. Nicole Rust runs the Visual Memory laboratory at the University of Pennsylvania. Her interests have now expanded to include mood and feelings, as you'll hear. She wrote this book, which contains a plethora of ideas about how neuroscience can pave a way forward to help treat mental and brain disorders. We talk about a handful of those ideas from her book, which also tells part of the story, which you'll hear, of her own journey in thinking about these things, from working early on in visual neuroscience to where she is now.




Nicole's website.



Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That.




0:00 - Intro
6:12 - Nicole's path
19:25 - The grand plan
25:18 - Robustness and fragility
39:15 - Mood
49:25 - Model everything!
56:26 - Epistemic iteration
1:06:50 - Can we standardize mood?
1:10:36 - Perspective neuroscience
1:20:12 - William Wimsatt
1:25:40 - Consciousness]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/06/thumb-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/06/thumb-web.jpg</url>
		<title>BI 214 Nicole Rust: How To Actually Fix Brains and Minds</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:33:26</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this story:



What, if anything, makes mood fundamentally different from memory?



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders―and How We Can Change That. Nicole Rust runs the Visual Memory laboratory at the University of Pennsylvania. Her interests have expanded now to include mood and feelings, as you'll hear. And she wrote this book, which contains a plet]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/06/thumb-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 213 Representations in Minds and Brains</title>
	<link>https://braininspired.co/podcast/213/</link>
	<pubDate>Wed, 04 Jun 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2499</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this series of essays about representations:</p>



<p><a href="https://www.thetransmitter.org/defining-representations/what-are-we-talking-about-clarifying-the-fuzzy-concept-of-representation-in-neuroscience-and-beyond/">What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond</a></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>What do neuroscientists mean when they use the term representation? That's part of what <a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a> and <a href="https://www.edouardmachery.com/">Edouard Machery</a> set out to answer a couple of years ago by surveying lots of folks in the cognitive sciences, and they concluded that, as a field, the term is used in a confused and unclear way. Confused and unclear are technical terms here, and Luis and Edouard explain what they mean in the episode. More recently, Luis and Edouard wrote a follow-up piece arguing that maybe it's okay for everyone to use the term in slightly different ways, and that it may even help communication across disciplines. My three other guests today, <a href="https://frances-egan.org/index.html">Frances Egan</a>, <a href="https://sites.google.com/site/luosha/home/research">Rosa Cao</a>, and <a href="https://www.blam-lab-jhu.org/">John Krakauer</a>, wrote responses to that argument, and on today's episode all those folks are here to further discuss that issue and why it matters. Luis is part philosopher, part cognitive scientist at Indiana University Bloomington; Edouard is a philosopher and Director of the Center for Philosophy of Science at the University of Pittsburgh; Frances is a philosopher at Rutgers University; Rosa is a neuroscientist-turned-philosopher at Stanford University; and John is a neuroscientist, among other things, who co-runs the Brain, Learning, Animation, and Movement Lab at Johns Hopkins.</p>



<ul class="wp-block-list">
<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a>.
<ul class="wp-block-list">
<li>Favela's book: <a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>



<li><a href="https://www.edouardmachery.com/">Edouard Machery</a>.
<ul class="wp-block-list">
<li>Machery's book: <a href="https://academic.oup.com/book/11923">Doing without Concepts</a></li>
</ul>
</li>



<li><a href="https://frances-egan.org/index.html">Frances Egan</a>.
<ul class="wp-block-list">
<li>Egan's book: <a href="https://amzn.to/4mvsEYs">Deflating Mental Representation</a>.</li>
</ul>
</li>



<li><a href="https://www.blam-lab-jhu.org/">John Krakauer</a>.</li>



<li><a href="https://sites.google.com/site/luosha/home/research">Rosa Cao</a>.
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s11229-022-03522-3">Paper mentioned: Putting representations to use</a>.</li>
</ul>
</li>



<li>The exchange, in order, discussed on this episode:
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1165622/full">Investigating the concept of representation in the neural and psychological sciences</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12531">The concept of representation in the brain sciences: The current status and ways forward</a>.</li>



<li>Commentaries:
<ul class="wp-block-list">
<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12535">Assessing the landscape of representational concepts: Commentary on Favela and Machery</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12527">Comments on Favela and Machery's The concept of representation in the brain sciences: The current status and ways forward</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12534">Where did real representations go? Commentary on: The concept of representation in the brain sciences: The current status and ways forward by Favela and Machery</a>.</li>



<li>Reply to commentaries:
<ul class="wp-block-list">
<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12533">Contextualizing, eliminating, or glossing: What to do with unclear scientific concepts like representation</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:55 - What is a representation to a neuroscientist?
14:44 - How to deal with the dilemma
21:20 - Opposing views
31:00 - What's at stake?
51:10 - Neural-only representation
1:01:11 - When "representation" is playing a useful role
1:12:56 - The role of a neuroscientist
1:39:35 - The purpose of "representational talk"
1:53:03 - Non-representational mental phenomenon
1:55:53 - Final thoughts</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this series of essays about representations:</p>



<p><a href="https://www.thetransmitter.org/defining-representations/what-are-we-talking-about-clarifying-the-fuzzy-concept-of-representation-in-neuroscience-and-beyond/">What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond</a></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>What do neuroscientists mean when they use the term representation? That's part of what <a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a> and <a href="https://www.edouardmachery.com/">Edouard Machery</a> set out to answer a couple of years ago by surveying lots of folks in the cognitive sciences, and they concluded that, as a field, the term is used in a confused and unclear way. Confused and unclear are technical terms here, and Luis and Edouard explain what they mean in the episode. More recently, Luis and Edouard wrote a follow-up piece arguing that maybe it's okay for everyone to use the term in slightly different ways, and that it may even help communication across disciplines. My three other guests today, <a href="https://frances-egan.org/index.html">Frances Egan</a>, <a href="https://sites.google.com/site/luosha/home/research">Rosa Cao</a>, and <a href="https://www.blam-lab-jhu.org/">John Krakauer</a>, wrote responses to that argument, and on today's episode all those folks are here to further discuss that issue and why it matters. Luis is part philosopher, part cognitive scientist at Indiana University Bloomington; Edouard is a philosopher and Director of the Center for Philosophy of Science at the University of Pittsburgh; Frances is a philosopher at Rutgers University; Rosa is a neuroscientist-turned-philosopher at Stanford University; and John is a neuroscientist, among other things, who co-runs the Brain, Learning, Animation, and Movement Lab at Johns Hopkins.</p>



<ul class="wp-block-list">
<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis Favela</a>.
<ul class="wp-block-list">
<li>Favela's book: <a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>



<li><a href="https://www.edouardmachery.com/">Edouard Machery</a>.
<ul class="wp-block-list">
<li>Machery's book: <a href="https://academic.oup.com/book/11923">Doing without Concepts</a></li>
</ul>
</li>



<li><a href="https://frances-egan.org/index.html">Frances Egan</a>.
<ul class="wp-block-list">
<li>Egan's book: <a href="https://amzn.to/4mvsEYs">Deflating Mental Representation</a>.</li>
</ul>
</li>



<li><a href="https://www.blam-lab-jhu.org/">John Krakauer</a>.</li>



<li><a href="https://sites.google.com/site/luosha/home/research">Rosa Cao</a>.
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s11229-022-03522-3">Paper mentioned: Putting representations to use</a>.</li>
</ul>
</li>



<li>The exchange, in order, discussed on this episode:
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1165622/full">Investigating the concept of representation in the neural and psychological sciences</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12531">The concept of representation in the brain sciences: The current status and ways forward</a>.</li>



<li>Commentaries:
<ul class="wp-block-list">
<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12535">Assessing the landscape of representational concepts: Commentary on Favela and Machery</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12527">Comments on Favela and Machery's The concept of representation in the brain sciences: The current status and ways forward</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12534">Where did real representations go? Commentary on: The concept of representation in the brain sciences: The current status and ways forward by Favela and Machery</a>.</li>



<li>Reply to commentaries:
<ul class="wp-block-list">
<li><a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12533">Contextualizing, eliminating, or glossing: What to do with unclear scientific concepts like representation</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:55 - What is a representation to a neuroscientist?
14:44 - How to deal with the dilemma
21:20 - Opposing views
31:00 - What's at stake?
51:10 - Neural-only representation
1:01:11 - When "representation" is playing a useful role
1:12:56 - The role of a neuroscientist
1:39:35 - The purpose of "representational talk"
1:53:03 - Non-representational mental phenomenon
1:55:53 - Final thoughts</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2499/213.mp3" length="123233565" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this series of essays about representations:



What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



What do neuroscientists mean when they use the term representation? That's part of what Luis Favela and Edouard Machery set out to answer a couple of years ago by surveying lots of folks in the cognitive sciences, and they concluded that, as a field, the term is used in a confused and unclear way. Confused and unclear are technical terms here, and Luis and Edouard explain what they mean in the episode. More recently, Luis and Edouard wrote a follow-up piece arguing that maybe it's okay for everyone to use the term in slightly different ways, and that it may even help communication across disciplines. My three other guests today, Frances Egan, Rosa Cao, and John Krakauer, wrote responses to that argument, and on today's episode all those folks are here to further discuss that issue and why it matters. Luis is part philosopher, part cognitive scientist at Indiana University Bloomington; Edouard is a philosopher and Director of the Center for Philosophy of Science at the University of Pittsburgh; Frances is a philosopher at Rutgers University; Rosa is a neuroscientist-turned-philosopher at Stanford University; and John is a neuroscientist, among other things, who co-runs the Brain, Learning, Animation, and Movement Lab at Johns Hopkins.




Luis Favela.

Favela's book: The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment





Edouard Machery.

Machery's book: Doing without Concepts





Frances Egan.

Egan's book: Deflating Mental Representation.





John Krakauer.



Rosa Cao.

Paper mentioned: Putting representations to use.





The exchange, in order, discussed on this episode:

Investigating the concept of representation in the neural and psychological sciences.



The concept of representation in the brain sciences: The current status and ways forward.



Commentaries:

Assessing the landscape of representational concepts: Commentary on Favela and Machery.



Comments on Favela and Machery's The concept of representation in the brain sciences: The current status and ways forward.



Where did real representations go? Commentary on: The concept of representation in the brain sciences: The current status and ways forward by Favela and Machery.



Reply to commentaries:

Contextualizing, eliminating, or glossing: What to do with unclear scientific concepts like representation.










0:00 - Intro
3:55 - What is a representation to a neuroscientist?
14:44 - How to deal with the dilemma
21:20 - Opposing views
31:00 - What's at stake?
51:10 - Neural-only representation
1:01:11 - When "representation" is playing a useful role
1:12:56 - The role of a neuroscientist
1:39:35 - The purpose of "representational talk"
1:53:03 - Non-representational mental phenomenon
1:55:53 - Final thoughts]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/06/213-Representations-thumb-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/06/213-Representations-thumb-web.jpg</url>
		<title>BI 213 Representations in Minds and Brains</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>02:07:09</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Check out this series of essays about representations:



What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



What do neuroscientists mean when they use the term representation? That's part of what Luis Favela and Edouard Machery set out to answer a couple years ago by surveying lots of folks in the cognitive sciences, and they concluded ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/06/213-Representations-thumb-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 212 John Beggs: Why Brains Seek the Edge of Chaos</title>
	<link>https://braininspired.co/podcast/212/</link>
	<pubDate>Wed, 21 May 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2496</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing: they maximize information transfer, they maximize the time range over which they operate, and they have a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called <a href="https://amzn.to/4jJMYD5">The Cortex and the Critical Point: Understanding the Power of Emergence</a>, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days.</p>



<p>On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it (and why there isn't a consensus on how to measure it), and what it means that criticality appears in so many natural systems outside of brains when we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that too, and much more.</p>



<ul class="wp-block-list">
<li><a href="http://www.beggslab.com/">Beggs Lab</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/4jJMYD5">The Cortex and the Critical Point: Understanding the Power of Emergence</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2022.703865/full">Addressing skepticism of the critical brain hypothesis</a></li>
</ul>
</li>



<li>Papers John mentioned:
<ul class="wp-block-list">
<li>Tetzlaff et al 2010: <a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1001013">Self-organized criticality in developing neuronal networks.</a></li>



<li>Haldeman and Beggs 2005: <a href="https://pubmed.ncbi.nlm.nih.gov/15783702/">Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States</a>.</li>



<li>Bertschinger et al 2004: <a href="https://proceedings.neurips.cc/paper/2004/hash/f8da71e562ff44a2bc7edf3578c593da-Abstract.html">At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks</a>.</li>



<li>Legenstein and Maass 2007: <a href="https://igi-web.tugraz.at/people/maass/psfiles/166.pdf">Edge of chaos and prediction of computational performance for neural circuit models.</a></li>



<li>Kinouchi and Copelli 2006: <a href="https://arxiv.org/abs/q-bio/0601037">Optimal dynamical range of excitable networks at criticality.</a></li>



<li>Chialvo 2010: <a href="https://arxiv.org/abs/1010.2530">Emergent complex neural dynamics.</a>.</li>



<li>Mora and Bialek 2011: <a href="https://www.princeton.edu/~wbialek/our_papers/mora+bialek_11.pdf">Are Biological Systems Poised at Criticality?</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/05/BI-212-transcript-production.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing: they maximize information transfer, they maximize the time range over which they operate, and they have a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called <a href="https://amzn.to/4jJMYD5">The Cortex and the Critical Point: Understanding the Power of Emergence</a>, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days.</p>



<p>On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it (and why there isn't a consensus on how to measure it), and what it means that criticality appears in so many natural systems outside of brains when we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that too, and much more.</p>



<ul class="wp-block-list">
<li><a href="http://www.beggslab.com/">Beggs Lab</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/4jJMYD5">The Cortex and the Critical Point: Understanding the Power of Emergence</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2022.703865/full">Addressing skepticism of the critical brain hypothesis</a></li>
</ul>
</li>



<li>Papers John mentioned:
<ul class="wp-block-list">
<li>Tetzlaff et al 2010: <a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1001013">Self-organized criticality in developing neuronal networks.</a></li>



<li>Haldeman and Beggs 2005: <a href="https://pubmed.ncbi.nlm.nih.gov/15783702/">Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States</a>.</li>



<li>Bertschinger et al 2004: <a href="https://proceedings.neurips.cc/paper/2004/hash/f8da71e562ff44a2bc7edf3578c593da-Abstract.html">At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks</a>.</li>



<li>Legenstein and Maass 2007: <a href="https://igi-web.tugraz.at/people/maass/psfiles/166.pdf">Edge of chaos and prediction of computational performance for neural circuit models.</a></li>



<li>Kinouchi and Copelli 2006: <a href="https://arxiv.org/abs/q-bio/0601037">Optimal dynamical range of excitable networks at criticality.</a></li>



<li>Chialvo 2010: <a href="https://arxiv.org/abs/1010.2530">Emergent complex neural dynamics.</a>.</li>



<li>Mora and Bialek 2011: <a href="https://www.princeton.edu/~wbialek/our_papers/mora+bialek_11.pdf">Are Biological Systems Poised at Criticality?</a></li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2025/05/BI-212-transcript-production.pdf" target="_blank" rel="noreferrer noopener">Read the transcript.</a></p>



<p>0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2496/212.mp3" length="91379108" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing, they maximize information transfer, they maximize the time range over which they operate, and a handful of other good properties. John Beggs has been studying criticality in brains for over 20 years now. His 2003 paper with Dietmar Plenz is one of the first, if not the first, to show networks of neurons operating near criticality, and it gets cited in almost every criticality paper I read. John runs the Beggs Lab at Indiana University Bloomington, and a few years ago he literally wrote the book on criticality, called The Cortex and the Critical Point: Understanding the Power of Emergence, which I highly recommend as an excellent introduction to the topic, and he continues to work on criticality these days.



On this episode we discuss what criticality is, why and how brains might strive for it, the past and present of how to measure it and why there isn't a consensus on how to measure it, and what it means that criticality appears in so many natural systems outside of brains even as we want to say it's a special property of brains. These days John spends plenty of effort defending the criticality hypothesis from critics, so we discuss that, and much more.




Beggs Lab.



Book:

The Cortex and the Critical Point: Understanding the Power of Emergence





Related papers:

Addressing skepticism of the critical brain hypothesis





Papers John mentioned:

Tetzlaff et al 2010: Self-organized criticality in developing neuronal networks.



Haldeman and Beggs 2005: Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States.



Bertschinger et al 2004: At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks.



Legenstein and Maass 2007: Edge of chaos and prediction of computational performance for neural circuit models.



Kinouchi and Copelli 2006: Optimal dynamical range of excitable networks at criticality.



Chialvo 2010: Emergent complex neural dynamics.



Mora and Bialek 2011: Are Biological Systems Poised at Criticality?






Read the transcript.



0:00 - Intro
4:28 - What is criticality?
10:19 - Why is criticality special in brains?
15:34 - Measuring criticality
24:28 - Dynamic range and criticality
28:28 - Criticisms of criticality
31:43 - Current state of critical brain hypothesis
33:34 - Causality and criticality
36:39 - Criticality as a homeostatic set point
38:49 - Is criticality necessary for life?
50:15 - Shooting for criticality far from thermodynamic equilibrium
52:45 - Quasi- and near-criticality
55:03 - Cortex vs. whole brain
58:50 - Structural criticality through development
1:01:09 - Criticality in AI
1:03:56 - Most pressing criticisms of criticality
1:10:08 - Gradients of criticality
1:22:30 - Homeostasis vs. criticality
1:29:57 - Minds and criticality]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/05/web-thumb.png"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/05/web-thumb.png</url>
		<title>BI 212 John Beggs: Why Brains Seek the Edge of Chaos</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:33:34</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





You may have heard of the critical brain hypothesis. It goes something like this: brain activity operates near a dynamical regime called criticality, poised at the sweet spot between too much order and too much chaos, and this is a good thing because systems at criticality are optimized for computing, they maximize information transfer, they maximize the time range over which they operat]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/05/web-thumb.png"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 211 COGITATE: Testing Theories of Consciousness</title>
	<link>https://braininspired.co/podcast/211/</link>
	<pubDate>Wed, 07 May 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2493</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p><a href="https://en-sagol.tau.ac.il/Rony_Hirschhorn">Rony Hirschhorn</a>, <a href="https://alexlepauvre.github.io/AlexLepauvre/">Alex Lepauvre</a>, and <a href="https://www.birmingham.ac.uk/staff/profiles/psychology/ferrante-oscar">Oscar Ferrante</a> are three of the many, many scientists who comprise the <a href="https://www.arc-cogitate.com/">COGITATE</a> group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case testing the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled <a href="https://www.nature.com/articles/s41586-025-08888-1">Adversarial testing of global neuronal workspace and integrated information theories of consciousness</a>, and this is what Rony, Alex, and Oscar discuss with me today.</p>



<p>The short summary is that they used a simple task and measured brain activity with three different methods: EEG, MEG, and fMRI, and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions. The take home is a mixed bag, with neither theory being fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related.</p>



<p>So we discuss the project itself, many of the challenges they faced, their experiences and reflections working on it and on coming together as a team, the nature of working on an adversarial collaboration, when so much is at stake for the proponents of each theory, and, as you heard last episode with <a href="https://braininspired.co/podcast/210/">Dean Buonomano</a>, when one of the theories, IIT, is surrounded by a bit of controversy itself regarding whether it should even be considered a scientific theory.</p>



<ul class="wp-block-list">
<li><a href="https://www.arc-cogitate.com/">COGITATE</a>.</li>



<li><a href="https://www.birmingham.ac.uk/staff/profiles/psychology/ferrante-oscar">Oscar Ferrante</a>. <a href="https://x.com/ferrante_oscar">@ferrante_oscar</a></li>



<li><a href="https://en-sagol.tau.ac.il/Rony_Hirschhorn">Rony Hirschhorn</a>. <a href="https://x.com/RonyHirsch">@RonyHirsch</a></li>



<li><a href="https://alexlepauvre.github.io/AlexLepauvre/">Alex Lepauvre</a>. <a href="https://x.com/lepauvrealex">@LepauvreAlex</a></li>



<li>Paper: <a href="https://www.nature.com/articles/s41586-025-08888-1">Adversarial testing of global neuronal workspace and integrated information theories of consciousness</a>.</li>



<li><a href="https://braininspired.co/podcast/210/">BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/05/Brain-Inspired-211-transcript-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:00 - COGITATE
17:42 - How the experiments were developed
32:37 - How data was collected and analyzed
41:24 - Prediction 1: Where is consciousness?
47:51 - The experimental task
1:00:14 - Prediction 2: Duration of consciousness-related activity
1:18:37 - Prediction 3: Inter-areal communication
1:28:28 - Big picture of the results
1:44:25 - Moving forward</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">Brain Inspired email alerts</a> to be notified every time a new Brain Inspired episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p><a href="https://en-sagol.tau.ac.il/Rony_Hirschhorn">Rony Hirschhorn</a>, <a href="https://alexlepauvre.github.io/AlexLepauvre/">Alex Lepauvre</a>, and <a href="https://www.birmingham.ac.uk/staff/profiles/psychology/ferrante-oscar">Oscar Ferrante</a> are three of the many, many scientists who comprise the <a href="https://www.arc-cogitate.com/">COGITATE</a> group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case testing the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled <a href="https://www.nature.com/articles/s41586-025-08888-1">Adversarial testing of global neuronal workspace and integrated information theories of consciousness</a>, and this is what Rony, Alex, and Oscar discuss with me today.</p>



<p>The short summary is that they used a simple task and measured brain activity with three different methods: EEG, MEG, and fMRI, and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions. The take home is a mixed bag, with neither theory being fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related.</p>



<p>So we discuss the project itself, many of the challenges they faced, their experiences and reflections working on it and on coming together as a team, the nature of working on an adversarial collaboration, when so much is at stake for the proponents of each theory, and, as you heard last episode with <a href="https://braininspired.co/podcast/210/">Dean Buonomano</a>, when one of the theories, IIT, is surrounded by a bit of controversy itself regarding whether it should even be considered a scientific theory.</p>



<ul class="wp-block-list">
<li><a href="https://www.arc-cogitate.com/">COGITATE</a>.</li>



<li><a href="https://www.birmingham.ac.uk/staff/profiles/psychology/ferrante-oscar">Oscar Ferrante</a>. <a href="https://x.com/ferrante_oscar">@ferrante_oscar</a></li>



<li><a href="https://en-sagol.tau.ac.il/Rony_Hirschhorn">Rony Hirschhorn</a>. <a href="https://x.com/RonyHirsch">@RonyHirsch</a></li>



<li><a href="https://alexlepauvre.github.io/AlexLepauvre/">Alex Lepauvre</a>. <a href="https://x.com/lepauvrealex">@LepauvreAlex</a></li>



<li>Paper: <a href="https://www.nature.com/articles/s41586-025-08888-1">Adversarial testing of global neuronal workspace and integrated information theories of consciousness</a>.</li>



<li><a href="https://braininspired.co/podcast/210/">BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/05/Brain-Inspired-211-transcript-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
4:00 - COGITATE
17:42 - How the experiments were developed
32:37 - How data was collected and analyzed
41:24 - Prediction 1: Where is consciousness?
47:51 - The experimental task
1:00:14 - Prediction 2: Duration of consciousness-related activity
1:18:37 - Prediction 3: Inter-areal communication
1:28:28 - Big picture of the results
1:44:25 - Moving forward</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2493/211.mp3" length="115845022" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many, many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case testing the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so what does that mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled Adversarial testing of global neuronal workspace and integrated information theories of consciousness, and this is what Rony, Alex, and Oscar discuss with me today.



The short summary is that they used a simple task and measured brain activity with three different methods: EEG, MEG, and fMRI, and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kind of functional connectivity patterns should be present between brain regions. The take home is a mixed bag, with neither theory being fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related.



So we discuss the project itself, many of the challenges they faced, their experiences and reflections working on it and on coming together as a team, the nature of working on an adversarial collaboration, when so much is at stake for the proponents of each theory, and, as you heard last episode with Dean Buonomano, when one of the theories, IIT, is surrounded by a bit of controversy itself regarding whether it should even be considered a scientific theory.




COGITATE.



Oscar Ferrante. @ferrante_oscar



Rony Hirschhorn. @RonyHirsch



Alex Lepauvre. @LepauvreAlex



Paper: Adversarial testing of global neuronal workspace and integrated information theories of consciousness.



BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics




Read the transcript.



0:00 - Intro
4:00 - COGITATE
17:42 - How the experiments were developed
32:37 - How data was collected and analyzed
41:24 - Prediction 1: Where is consciousness?
47:51 - The experimental task
1:00:14 - Prediction 2: Duration of consciousness-related activity
1:18:37 - Prediction 3: Inter-areal communication
1:28:28 - Big picture of the results
1:44:25 - Moving forward]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/05/211-Cogitate-webthumb-2-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/05/211-Cogitate-webthumb-2-web.jpg</url>
		<title>BI 211 COGITATE: Testing Theories of Consciousness</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:59:40</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many, many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case testing the integrated information theory of consciousness and the global neuronal workspace theory of consciousness. I said it's an adversarial collaboration, so wha]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/05/211-Cogitate-webthumb-2-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics</title>
	<link>https://braininspired.co/podcast/210/</link>
	<pubDate>Tue, 22 Apr 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2486</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book <a href="https://amzn.to/42hYfog">Your Brain is a Time Machine: The Neuroscience and Physics of Time</a>, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues.</p>



<p>One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you culture, letting them divide, grow, and specialize until you have a mass of cells that has grown into an organ of some sort, to then perform experiments on. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic point - to get them as close as you can to what they're like in the intact brain... then perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.</p>



<p>But, we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called <a href="https://www.thetransmitter.org/neuroai/the-brain-holds-no-exclusive-rights-on-how-to-create-intelligence/">The brain holds no exclusive rights on how to create intelligence</a>. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe.</p>



<p>We then talk about his recent chapter with physicist Carlo Rovelli, titled <a href="https://www.buonomanolab.com/Publications/BuonomanoRovelli_BridgingNeurosciencePhysicsTime_TimeScience_2023.pdf">Bridging the neuroscience and physics of time</a>, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time.</p>



<p>Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expressions from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely that it doesn't play well with the existing ontology of physics. What I just said doesn't do justice to his arguments, which he articulates much better.</p>



<ul class="wp-block-list">
<li><a href="https://www.buonomanolab.com/">Buonomano lab</a>.</li>



<li>Twitter: <a href="https://x.com/DeanBuono">@DeanBuono</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/neuroai/the-brain-holds-no-exclusive-rights-on-how-to-create-intelligence/">The brain holds no exclusive rights on how to create intelligence</a>.</li>



<li><a href="https://www.nature.com/articles/s41593-025-01881-x">What makes a theory of consciousness unscientific?</a></li>



<li><a href="https://www.nature.com/articles/s41467-025-58013-z">Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns</a>.</li>



<li><a href="https://www.buonomanolab.com/Publications/BuonomanoRovelli_BridgingNeurosciencePhysicsTime_TimeScience_2023.pdf">Bridging the neuroscience and physics of time</a>.</li>
</ul>
</li>



<li><a href="https://braininspired.co/podcast/204/">BI 204 David Robbe: Your Brain Doesn’t Measure Time</a></li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/04/Brain-inspired-210-transcript-dean-buonomano-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
8:49 - AI doesn't need biology
17:52 - Time in physics and in neuroscience
34:04 - Integrated information theory
1:01:34 - Global neuronal workspace theory
1:07:46 - Organotypic slices and predictive processing
1:26:07 - Do brains actually measure time? David Robbe</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Ma]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book <a href="https://amzn.to/42hYfog">Your Brain is a Time Machine: The Neuroscience and Physics of Time</a>, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues.</p>



<p>One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you culture, letting them divide, grow, and specialize until you have a mass of cells that has grown into an organ of some sort, to then perform experiments on. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic point - to get them as close as you can to what they're like in the intact brain... then perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.</p>



<p>But, we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called <a href="https://www.thetransmitter.org/neuroai/the-brain-holds-no-exclusive-rights-on-how-to-create-intelligence/">The brain holds no exclusive rights on how to create intelligence</a>. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe.</p>



<p>We then talk about his recent chapter with physicist Carlo Rovelli, titled <a href="https://www.buonomanolab.com/Publications/BuonomanoRovelli_BridgingNeurosciencePhysicsTime_TimeScience_2023.pdf">Bridging the neuroscience and physics of time</a>, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time.</p>



<p>Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expressions from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely that it doesn't play well with the existing ontology of physics. What I just said doesn't do justice to his arguments, which he articulates much better.</p>



<ul class="wp-block-list">
<li><a href="https://www.buonomanolab.com/">Buonomano lab</a>.</li>



<li>Twitter: <a href="https://x.com/DeanBuono">@DeanBuono</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/neuroai/the-brain-holds-no-exclusive-rights-on-how-to-create-intelligence/">The brain holds no exclusive rights on how to create intelligence</a>.</li>



<li><a href="https://www.nature.com/articles/s41593-025-01881-x">What makes a theory of consciousness unscientific?</a></li>



<li><a href="https://www.nature.com/articles/s41467-025-58013-z">Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns</a>.</li>



<li><a href="https://www.buonomanolab.com/Publications/BuonomanoRovelli_BridgingNeurosciencePhysicsTime_TimeScience_2023.pdf">Bridging the neuroscience and physics of time</a>.</li>
</ul>
</li>



<li><a href="https://braininspired.co/podcast/204/">BI 204 David Robbe: Your Brain Doesn’t Measure Time</a></li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/04/Brain-inspired-210-transcript-dean-buonomano-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
8:49 - AI doesn't need biology
17:52 - Time in physics and in neuroscience
34:04 - Integrated information theory
1:01:34 - Global neuronal workspace theory
1:07:46 - Organotypic slices and predictive processing
1:26:07 - Do brains actually measure time? David Robbe</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2486/210.mp3" length="106907242" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues.



One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism and maintained in a brain-like fluid while you perform experiments on them. Organoids start with a small number of cells that you then culture, letting them divide and grow and specialize until you have a mass of cells that has grown into an organ of some sort, to then perform experiments on. Organotypic brain slices are extracted from an organism, like brain slices, but then also cultured for some time to let them settle back into some sort of near-homeostatic point - to get them as close as you can to what they're like in the intact brain... then perform experiments on them. Dean and his colleagues use optogenetics to train their brain slices to predict the timing of stimuli, and they find that the populations of neurons do indeed learn to predict the timing of the stimuli, and that they exhibit replay of those sequences similar to the replay seen in brain areas like the hippocampus.



But, we begin our conversation talking about Dean's recent piece in The Transmitter, which I'll point to in the show notes, called The brain holds no exclusive rights on how to create intelligence. There he argues that modern AI is likely to continue its recent successes despite the ongoing divergence between AI and neuroscience. This is in contrast to what many folks in NeuroAI believe.



We then talk about his recent chapter with physicist Carlo Rovelli, titled Bridging the neuroscience and physics of time, in which Dean and Carlo examine where neuroscience and physics disagree and where they agree about the nature of time.



Finally, we discuss Dean's thoughts on the integrated information theory of consciousness, or IIT. IIT has seen a little controversy lately. Over 100 scientists, a large part of that group calling themselves IIT-Concerned, have expressed concern that IIT is actually unscientific. This has caused backlash and anti-backlash, and all sorts of fun expression from many interested people. Dean explains his own views about why he thinks IIT is not in the purview of science - namely that it doesn't play well with the existing ontology of what physics says about the world. What I just said doesn't do justice to his arguments, which he articulates much better.




Buonomano lab.



Twitter: @DeanBuono.



Related papers

The brain holds no exclusive rights on how to create intelligence.



What makes a theory of consciousness unscientific?



Ex vivo cortical circuits learn to predict and spontaneously replay temporal patterns.



Bridging the neuroscience and physics of time.





BI 204 David Robbe: Your Brain Doesn’t Measure Time




Read the&nbsp;transcript.



0:00 - Intro
8:49 - AI doesn't need biology
17:52 - Time in physics and in neuroscience
34:04 - Integrated information theory
1:01:34 - Global neuronal workspace theory
1:07:46 - Organotypic slices and predictive processing
1:26:07 - Do brains actually measure time? David Robbe]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/04/211-Dean-Buonomano-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/04/211-Dean-Buonomano-web.jpg</url>
		<title>BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:50:33</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Dean Buonomano runs the Buonomano lab at UCLA. Dean was a guest on Brain Inspired way back on episode 18, where we talked about his book Your Brain is a Time Machine: The Neuroscience and Physics of Time, which details much of his thought and research about how centrally important time is for virtually everything we do, different conceptions of time in philosophy, and how brains might tell time. That was almost 7 years ago, and his work on time and dynamics in computational neuroscience continues.



One thing we discuss today, later in the episode, is his recent work using organotypic brain slices to test the idea that cortical circuits implement timing as a computational primitive: it's something they do by their very nature. Organotypic brain slices sit between what I think of as traditional brain slices and full-on organoids. Brain slices are extracted from an organism, and maintaine]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/04/211-Dean-Buonomano-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 209 Aran Nayebi: The NeuroAI Turing Test</title>
	<link>https://braininspired.co/podcast/209/</link>
	<pubDate>Wed, 09 Apr 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2482</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” <a href="https://www.thetransmitter.org/newsletters/">email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he has plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in at least similar ways to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort.</p>



<p>We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of similarity of the internal representations used to achieve the various tasks the models perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation.</p>



<ul class="wp-block-list">
<li><a href="https://anayebi.github.io/">Aran's Website</a>.</li>



<li>Twitter: <a href="https://www.twitter.com/aran_nayebi">@aran_nayebi</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2502.16238">Brain-model evaluations need the NeuroAI Turing Test</a>.</li>



<li><a href="https://arxiv.org/abs/2502.05934">Barriers and pathways to human-AI alignment: a game-theoretic approach</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:24 - Background
20:46 - Building embodied agents
33:00 - Adaptability
49:25 - Marr's levels
54:12 - Sensorimotor loop and intrinsic goals
1:00:05 - NeuroAI Turing Test
1:18:18 - Representations
1:28:18 - How to know what to measure
1:32:56 - AI safety</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” <a href="https://www.thetransmitter.org/newsletters/">email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he has plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in at least similar ways to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort.</p>



<p>We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of similarity of the internal representations used to achieve the various tasks the models perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation.</p>



<ul class="wp-block-list">
<li><a href="https://anayebi.github.io/">Aran's Website</a>.</li>



<li>Twitter: <a href="https://www.twitter.com/aran_nayebi">@aran_nayebi</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2502.16238">Brain-model evaluations need the NeuroAI Turing Test</a>.</li>



<li><a href="https://arxiv.org/abs/2502.05934">Barriers and pathways to human-AI alignment: a game-theoretic approach</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:24 - Background
20:46 - Building embodied agents
33:00 - Adaptability
49:25 - Marr's levels
54:12 - Sensorimotor loop and intrinsic goals
1:00:05 - NeuroAI Turing Test
1:18:18 - Representations
1:28:18 - How to know what to measure
1:32:56 - AI safety</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2482/209.mp3" length="100790804" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he has plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in at least similar ways to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort.



We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of similarity of the internal representations used to achieve the various tasks the models perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations philosophy of mind often refers to, or other philosophical notions of the term representation.




Aran's Website.



Twitter: @aran_nayebi.



Related papers

Brain-model evaluations need the NeuroAI Turing Test.



Barriers and pathways to human-AI alignment: a game-theoretic approach.






0:00 - Intro
5:24 - Background
20:46 - Building embodied agents
33:00 - Adaptability
49:25 - Marr's levels
54:12 - Sensorimotor loop and intrinsic goals
1:00:05 - NeuroAI Turing Test
1:18:18 - Representations
1:28:18 - How to know what to measure
1:32:56 - AI safety]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/04/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/04/thumb-website.jpg</url>
		<title>BI 209 Aran Nayebi: The NeuroAI Turing Test</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:59</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algor]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/04/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation</title>
	<link>https://braininspired.co/podcast/208/</link>
	<pubDate>Wed, 26 Mar 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2474</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Gabriele Scheler co-founded the <a href="https://www.theoretical-biology.org/">Carl Correns Foundation for Mathematical Biology</a>. Carl Correns was her great grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist, whose goal is to build models of cellular computation, and much of her focus is on neurons.</p>



<p>We discuss her theoretical work building a new kind of single neuron model. She, like <a href="https://braininspired.co/podcast/205/">Dmitri Chklovskii</a> a few episodes ago, believes we've been stuck with essentially the same family of models for a neuron for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects the computations going on not only externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like <a href="https://braininspired.co/podcast/126/">Randy Gallistel</a>, <a href="https://braininspired.co/podcast/172/">David Glanzman</a>, and <a href="https://braininspired.co/podcast/199/">Hessam Akhlaghpour</a>, who argue that we need to pay attention to how neurons are computing various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, drastically simplifying the models by providing them with smarter neurons, essentially.</p>



<p>We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.</p>



<ul class="wp-block-list">
<li><a href="https://gabriele-scheler.mystrikingly.com/">Gabriele's website</a>.</li>



<li><a href="https://www.theoretical-biology.org/">Carl Correns Foundation for Mathematical Biology</a>.
<ul class="wp-block-list">
<li><a href="https://braincentric.ai">Neuro-AI spinoff</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2209.06865">Sketch of a novel approach to a neural model</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/658153v5">Localist neural plasticity identified by mutual information</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/199/">BI 199 Hessam Akhlaghpour: Natural Universal Computation</a></li>



<li><a href="https://braininspired.co/podcast/172/">BI 172 David Glanzman: Memory All The Way Down</a></li>



<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great grandfather, one of the early pioneers in genetics.]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Gabriele Scheler co-founded the <a href="https://www.theoretical-biology.org/">Carl Correns Foundation for Mathematical Biology</a>. Carl Correns was her great grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist, whose goal is to build models of cellular computation, and much of her focus is on neurons.</p>



<p>We discuss her theoretical work building a new kind of single neuron model. She, like <a href="https://braininspired.co/podcast/205/">Dmitri Chklovskii</a> a few episodes ago, believes we've been stuck with essentially the same family of models for a neuron for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects the computations going on not only externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like <a href="https://braininspired.co/podcast/126/">Randy Gallistel</a>, <a href="https://braininspired.co/podcast/172/">David Glanzman</a>, and <a href="https://braininspired.co/podcast/199/">Hessam Akhlaghpour</a>, who argue that we need to pay attention to how neurons are computing various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, drastically simplifying the models by providing them with smarter neurons, essentially.</p>



<p>We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.</p>



<ul class="wp-block-list">
<li><a href="https://gabriele-scheler.mystrikingly.com/">Gabriele's website</a>.</li>



<li><a href="https://www.theoretical-biology.org/">Carl Correns Foundation for Mathematical Biology</a>.
<ul class="wp-block-list">
<li><a href="https://braincentric.ai">Neuro-AI spinoff</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2209.06865">Sketch of a novel approach to a neural model</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/658153v5">Localist neural plasticity identified by mutual information</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/199/">BI 199 Hessam Akhlaghpour: Natural Universal Computation</a></li>



<li><a href="https://braininspired.co/podcast/172/">BI 172 David Glanzman: Memory All The Way Down</a></li>



<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2474/208.mp3" length="92178725" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist, whose goal is to build models of cellular computation, and much of her focus is on neurons.



We discuss her theoretical work building a new kind of single neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of models for a neuron for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects the computations going on not only externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons are computing various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, drastically simplifying the models by providing them with smarter neurons, essentially.



We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.




Gabriele's website.



Carl Correns Foundation for Mathematical Biology.

Neuro-AI spinoff





Related papers

Sketch of a novel approach to a neural model.



Localist neural plasticity identified by mutual information.





Related episodes

BI 199 Hessam Akhlaghpour: Natural Universal Computation



BI 172 David Glanzman: Memory All The Way Down



BI 126 Randy Gallistel: Where Is the Engram?






0:00 - Intro
4:41 - Gabriele's early interests in verbal thinking
14:14 - What is thinking?
24:04 - Starting one's own foundation
58:18 - Building a new single neuron model
1:19:25 - The right level of abstraction
1:25:00 - How a new neuron would change AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/03/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/03/website-thumb.jpg</url>
		<title>BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:35:08</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great grandfather, one of the early pioneers in genetics. Gabriele is a computational neuroscientist, whose goal is to build models of cellular computation, and much of her focus is on neurons.



We discuss her theoretical work building a new kind of single neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of models for a neuron for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects the computations going on not only externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that w]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/03/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 207 Alison Preston: Schemas in our Brains and Minds</title>
	<link>https://braininspired.co/podcast/207/</link>
	<pubDate>Wed, 12 Mar 2025 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2471</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework to organize sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term schema in a psychological sense, to explain how our memories are organized and how new information gets integrated into our memory. Fast forward another 100 years to today, and we have a podcast episode with my guest today, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her <em>neuroscience</em> research explaining how our brains might carry out the processing that fits with our modern conception of schemas, and how our brains do that in different ways as we develop from childhood to adulthood.</p>



<p>I just said "our modern conception of schemas," but as with everything else, there isn't complete consensus among scientists on exactly how to define a schema. Ali has her own definition. She shares it, along with how it differs from other conceptions in common use. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired and which is sometimes used interchangeably with schemas. So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on.</p>



<p><a href="https://braininspired.co/podcast/206/">Last episode, Ciara Greene</a> discussed schemas: how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today, Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.</p>



<ul class="wp-block-list">
<li><a href="https://preston.clm.utexas.edu/">Preston Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/preston_lab">@preston_lab</a></li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://clm.utexas.edu/preston/wp-content/uploads/2022/05/1-s2.0-S235215462030190X-main.pdf">Concept formation as a computational cognitive process</a>.</li>



<li><a href="https://preston.clm.utexas.edu/wp-content/uploads/2023/08/Concept-formation-as-a-computational-cognitive-process.pdf">Schema, Inference, and Memory</a>.</li>



<li><a href="https://clm.utexas.edu/preston/wp-content/uploads/2022/05/s41562-021-01206-5.pdf">Developmental differences in memory reactivation relate to encoding and inference in the human brain</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/03/BI-207-transcript-proof.pdf">transcript</a>.</p>



<p>0:00 - Intro
6:51 - Schemas
20:37 - Schemas and the developing brain
35:03 - Information theory, dimensionality, and detail
41:17 - Geometry of schemas
47:26 - Schemas and creativity
50:29 - Brain connection pruning with development
1:02:46 - Information in brains
1:09:20 - Schemas and development in AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Vi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>





<p>The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term in a psychological sense, to explain how our memories are organized and how new information gets integrated into them. Fast forward nearly another 100 years to today, and we have a podcast episode with my guest, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her <em>neuroscience</em> research on how our brains might carry out the processing that fits our modern conception of schemas, and how they do so in different ways as we develop from childhood to adulthood.</p>



<p>I just said "our modern conception of schemas," but as with everything else, there isn't complete consensus among scientists on exactly how to define a schema. Ali has her own definition. She shares it, along with how it differs from other conceptions in common use. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired and which is sometimes used interchangeably with schemas. So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on.</p>



<p><a href="https://braininspired.co/podcast/206/">Last episode, Ciara Greene</a> discussed schemas: how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today, Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.</p>



<ul class="wp-block-list">
<li><a href="https://preston.clm.utexas.edu/">Preston Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/preston_lab">@preston_lab</a></li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://clm.utexas.edu/preston/wp-content/uploads/2022/05/1-s2.0-S235215462030190X-main.pdf">Concept formation as a computational cognitive process</a>.</li>



<li><a href="https://preston.clm.utexas.edu/wp-content/uploads/2023/08/Concept-formation-as-a-computational-cognitive-process.pdf">Schema, Inference, and Memory</a>.</li>



<li><a href="https://clm.utexas.edu/preston/wp-content/uploads/2022/05/s41562-021-01206-5.pdf">Developmental differences in memory reactivation relate to encoding and inference in the human brain</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/03/BI-207-transcript-proof.pdf">transcript</a>.</p>



<p>0:00 - Intro
6:51 - Schemas
20:37 - Schemas and the developing brain
35:03 - Information theory, dimensionality, and detail
41:17 - Geometry of schemas
47:26 - Schemas and creativity
50:29 - Brain connection pruning with development
1:02:46 - Information in brains
1:09:20 - Schemas and development in AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2471/207.mp3" length="87126321" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term in a psychological sense, to explain how our memories are organized and how new information gets integrated into them. Fast forward nearly another 100 years to today, and we have a podcast episode with my guest, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her neuroscience research on how our brains might carry out the processing that fits our modern conception of schemas, and how they do so in different ways as we develop from childhood to adulthood.



I just said "our modern conception of schemas," but as with everything else, there isn't complete consensus among scientists on exactly how to define a schema. Ali has her own definition. She shares it, along with how it differs from other conceptions in common use. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired and which is sometimes used interchangeably with schemas. So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on.



Last episode, Ciara Greene discussed schemas: how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today, Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.




Preston Lab



Twitter:&nbsp;@preston_lab



Related papers:

Concept formation as a computational cognitive process.



Schema, Inference, and Memory.



Developmental differences in memory reactivation relate to encoding and inference in the human brain.






Read the transcript.



0:00 - Intro
6:51 - Schemas
20:37 - Schemas and the developing brain
35:03 - Information theory, dimensionality, and detail
41:17 - Geometry of schemas
47:26 - Schemas and creativity
50:29 - Brain connection pruning with development
1:02:46 - Information in brains
1:09:20 - Schemas and development in AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/03/thumb-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/03/thumb-1.jpg</url>
		<title>BI 207 Alison Preston: Schemas in our Brains and Minds</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:47</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.





The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term in a psychological sense, to explain how our memories are organized and how new information gets integrated into ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/03/thumb-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>Quick Announcement: Complexity Group</title>
	<link>https://braininspired.co/podcast/quick-announcement-complexity-group/</link>
	<pubDate>Wed, 05 Mar 2025 00:08:34 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2459</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong>Here's the link to learn more and sign up:</strong></p>



<p class="has-text-align-center"><strong><a href="https://braininspired.co/complexity-group-email/">Complexity Group Email</a> List.</strong></p>]]></description>
	<itunes:subtitle><![CDATA[Here's the link to learn more and sign up:



Complexity Group Email List.]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong>Here's the link to learn more and sign up:</strong></p>



<p class="has-text-align-center"><strong><a href="https://braininspired.co/complexity-group-email/">Complexity Group Email</a> List.</strong></p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2459/quick-announcement-complexity-group.mp3" length="13213036" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Here's the link to learn more and sign up:



Complexity Group Email List.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/03/FP_2x2_covers.png"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/03/FP_2x2_covers.png</url>
		<title>Quick Announcement: Complexity Group</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>00:06:47</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Here's the link to learn more and sign up:



Complexity Group Email List.]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/03/FP_2x2_covers.png"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 206 Ciara Greene: Memories Are Useful, Not Accurate</title>
	<link>https://braininspired.co/podcast/206/</link>
	<pubDate>Wed, 26 Feb 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2428</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book <a href="https://amzn.to/42l4msv">Memory Lane: The Perfectly Imperfect Ways We Remember</a>, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one: we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means, for example, that our memories are flexible and constantly changing, and that forgetting can be beneficial.</p>



<p>Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these conditions, and how we might better understand our own and others' memories.</p>



<ul class="wp-block-list">
<li><a href="https://ucdattentionmemory.com/">Attention and Memory Lab</a></li>



<li>Twitter:&nbsp;<a href="https://x.com/ciaragreene01">@ciaragreene01</a>.</li>



<li>Book: <a href="https://amzn.to/42l4msv">Memory Lane: The Perfectly Imperfect Ways We Remember</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/02/BI-206-transcript-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
5:35 - The function of memory
6:41 - Reconstructive nature of memory
13:50 - Memory schemas, highly superior autobiographical memory
20:49 - Misremembering and flashbulb memories
27:52 - Forgetting and schemas
36:06 - What is a "good" memory?
39:35 - Memories and intention
43:47 - Memory and context
49:55 - Implanting false memories
1:04:10 - Memory suggestion during interrogations
1:06:30 - Memory, imagination, and creativity
1:13:45 - Artificial intelligence and memory
1:21:21 - Driven by questions</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly I]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book <a href="https://amzn.to/42l4msv">Memory Lane: The Perfectly Imperfect Ways We Remember</a>, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one: we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means, for example, that our memories are flexible and constantly changing, and that forgetting can be beneficial.</p>



<p>Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these conditions, and how we might better understand our own and others' memories.</p>



<ul class="wp-block-list">
<li><a href="https://ucdattentionmemory.com/">Attention and Memory Lab</a></li>



<li>Twitter:&nbsp;<a href="https://x.com/ciaragreene01">@ciaragreene01</a>.</li>



<li>Book: <a href="https://amzn.to/42l4msv">Memory Lane: The Perfectly Imperfect Ways We Remember</a></li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/02/BI-206-transcript-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
5:35 - The function of memory
6:41 - Reconstructive nature of memory
13:50 - Memory schemas, highly superior autobiographical memory
20:49 - Misremembering and flashbulb memories
27:52 - Forgetting and schemas
36:06 - What is a "good" memory?
39:35 - Memories and intention
43:47 - Memory and context
49:55 - Implanting false memories
1:04:10 - Memory suggestion during interrogations
1:06:30 - Memory, imagination, and creativity
1:13:45 - Artificial intelligence and memory
1:21:21 - Driven by questions</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2428/206.mp3" length="86934394" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one: we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means, for example, that our memories are flexible and constantly changing, and that forgetting can be beneficial.



Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these conditions, and how we might better understand our own and others' memories.




Attention and Memory Lab



Twitter:&nbsp;@ciaragreene01.



Book: Memory Lane: The Perfectly Imperfect Ways We Remember




Read the transcript.



0:00 - Intro
5:35 - The function of memory
6:41 - Reconstructive nature of memory
13:50 - Memory schemas, highly superior autobiographical memory
20:49 - Misremembering and flashbulb memories
27:52 - Forgetting and schemas
36:06 - What is a "good" memory?
39:35 - Memories and intention
43:47 - Memory and context
49:55 - Implanting false memories
1:04:10 - Memory suggestion during interrogations
1:06:30 - Memory, imagination, and creativity
1:13:45 - Artificial intelligence and memory
1:21:21 - Driven by questions]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/02/thumb-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/02/thumb-web.jpg</url>
		<title>BI 206 Ciara Greene: Memories Are Useful, Not Accurate</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:10</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Ciara Greene is Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one: we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means, for example, that our memories are flexible and constantly changing, and that forgetting can be beneficial.



Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accur]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/02/thumb-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think</title>
	<link>https://braininspired.co/podcast/205/</link>
	<pubDate>Wed, 12 Feb 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2425</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” <a href="https://www.thetransmitter.org/newsletters/">email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Since the 1940s and '50s, back at the origins of what we now think of as artificial intelligence, there have been many ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller operating under the principles of feedback control. This view has been carried down to the present day in various forms. Also since that same period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been many ways of conceiving what it is that single neurons do. Are they logical operators? Does each represent something special? Are they trying to maximize efficiency?</p>



<p>Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves individual controllers. They're smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller. We talk about historical conceptions of the function of single neurons and how this view differs, how to think about single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics.</p>



<p>We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited space in our craniums. Evolution, of course, produced its own solutions to this problem. This pursuit led Mitya to study the worm C. elegans, because its connectome was nearly complete. In fact, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.</p>



<ul class="wp-block-list">
<li><a href="https://neural-circuits-and-algorithms.github.io/">Chklovskii Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/chklovskii">@chklovskii</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.biorxiv.org/content/10.1101/2024.01.02.573843v1">The Neuron as a Direct Data-Driven Controller</a>.</li>



<li><a href="https://www.pnas.org/doi/10.1073/pnas.2117484120">Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/143/">BI 143 Rodolphe Sepulchre: Mixed Feedback Control</a></li>



<li><a href="https://braininspired.co/podcast/119/">BI 119 Henry Yin: The Crisis in Neuroscience</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/02/BI-205-transcript-production.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:34 - A physicist's approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” <a href="https://www.thetransmitter.org/newsletters/">email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Since the 1940s and '50s, back at the origins of what we now think of as artificial intelligence, there have been many ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller operating under the principles of feedback control. This view has been carried down to the present day in various forms. Also since that same period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been many ways of conceiving what it is that single neurons do. Are they logical operators? Does each represent something special? Are they trying to maximize efficiency?</p>



<p>Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves individual controllers. They're smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller. We talk about historical conceptions of the function of single neurons and how this view differs, how to think about single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics.</p>



<p>We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited space in our craniums. Evolution, of course, produced its own solutions to this problem. This pursuit led Mitya to study the worm C. elegans, because its connectome was nearly complete. In fact, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.</p>



<ul class="wp-block-list">
<li><a href="https://neural-circuits-and-algorithms.github.io/">Chklovskii Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/chklovskii">@chklovskii</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.biorxiv.org/content/10.1101/2024.01.02.573843v1">The Neuron as a Direct Data-Driven Controller</a>.</li>



<li><a href="https://www.pnas.org/doi/10.1073/pnas.2117484120">Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/143/">BI 143 Rodolphe Sepulchre: Mixed Feedback Control</a></li>



<li><a href="https://braininspired.co/podcast/119/">BI 119 Henry Yin: The Crisis in Neuroscience</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2025/02/BI-205-transcript-production.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:34 - A physicist's approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2425/205.mp3" length="96935517" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: 



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down in various forms to the present day. Also since that same time period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators, do they each represent something special, are they trying to maximize efficiency, for example?



Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves each individual controllers. They're smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller. We talk about historical conceptions of the function of single neurons and how Mitya's view differs, how to think of single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics.



We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited amount of space in our craniums. Obviously evolution produced its own solutions to this problem. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete; in fact, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.




Chklovskii Lab.



Twitter:&nbsp;@chklovskii.



Related papers

The Neuron as a Direct Data-Driven Controller.



Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.





Related episodes

BI 143 Rodolphe Sepulchre: Mixed Feedback Control



BI 119 Henry Yin: The Crisis in Neuroscience






Read the transcript.



0:00 - Intro
7:34 - A physicist's approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/02/thumb-websute.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/02/thumb-websute.jpg</url>
		<title>BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:39:05</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: 



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/02/thumb-websute.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 204 David Robbe: Your Brain Doesn&#8217;t Measure Time</title>
	<link>https://braininspired.co/podcast/204/</link>
	<pubDate>Wed, 29 Jan 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2423</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released: </p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>When you play hide and seek, as you do on a regular basis, I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you’re counting silently, could it be that you’re imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains or measure time internally; in fact, he questions whether we measure time at all. Rather, our estimation of time emerges through our interactions with the world around us and/or the world within us as we behave.</p>



<p>David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode, we discuss how all of this came about - how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, which is the brain region David focuses on, how the rodents he studies behave in surprising ways when he asks them to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time.</p>



<ul class="wp-block-list">
<li><a href="https://www.inmed.fr/en/en-avenir-dynamiques-neuronales-et-fonctions-des-ganglions-de-la-base">Cortical-Basal Ganglia Circuits and Behavior Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/dav_robbe">@dav_robbe</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://amu.hal.science/hal-04225756/document">Lost in time: Relocating the perception of duration outside the brain</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2024.05.31.596850v1">Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:59 - Why behavior is so important in itself
10:27 - Henri Bergson
21:17 - Bergson's view of life
26:25 - A task to test how animals time things
34:08 - Back to Bergson and durée
39:44 - Externalizing time
44:11 - Internal representation of time
1:03:38 - Cognition as internal movement
1:09:14 - Free will
1:15:27 - Implications for AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released: </p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>When you play hide and seek, as you do on a regular basis, I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you’re counting silently, could it be that you’re imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains or measure time internally; in fact, he questions whether we measure time at all. Rather, our estimation of time emerges through our interactions with the world around us and/or the world within us as we behave.</p>



<p>David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode, we discuss how all of this came about - how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, which is the brain region David focuses on, how the rodents he studies behave in surprising ways when he asks them to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time.</p>



<ul class="wp-block-list">
<li><a href="https://www.inmed.fr/en/en-avenir-dynamiques-neuronales-et-fonctions-des-ganglions-de-la-base">Cortical-Basal Ganglia Circuits and Behavior Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/dav_robbe">@dav_robbe</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://amu.hal.science/hal-04225756/document">Lost in time: Relocating the perception of duration outside the brain</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2024.05.31.596850v1">Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:59 - Why behavior is so important in itself
10:27 - Henri Bergson
21:17 - Bergson's view of life
26:25 - A task to test how animals time things
34:08 - Back to Bergson and durée
39:44 - Externalizing time
44:11 - Internal representation of time
1:03:38 - Cognition as internal movement
1:09:14 - Free will
1:15:27 - Implications for AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2423/204.mp3" length="94690750" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: 



To explore more neuroscience news and perspectives, visit thetransmitter.org.



When you play hide and seek, as you do on a regular basis, I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you’re counting silently, could it be that you’re imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains or measure time internally; in fact, he questions whether we measure time at all. Rather, our estimation of time emerges through our interactions with the world around us and/or the world within us as we behave.



David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode, we discuss how all of this came about - how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, which is the brain region David focuses on, how the rodents he studies behave in surprising ways when he asks them to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time.




Cortical-Basal Ganglia Circuits and Behavior Lab.



Twitter:&nbsp;@dav_robbe



Related papers

Lost in time: Relocating the perception of duration outside the brain.



Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging.






0:00 - Intro
3:59 - Why behavior is so important in itself
10:27 - Henri Bergson
21:17 - Bergson's view of life
26:25 - A task to test how animals time things
34:08 - Back to Bergson and durée
39:44 - Externalizing time
44:11 - Internal representation of time
1:03:38 - Cognition as internal movement
1:09:14 - Free will
1:15:27 - Implications for AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/01/204-David-Robbe-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/01/204-David-Robbe-website.jpg</url>
		<title>BI 204 David Robbe: Your Brain Doesn&#8217;t Measure Time</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:37:37</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: 



To explore more neuroscience news and perspectives, visit thetransmitter.org.



When you play hide and seek, as you do on a regular basis I'm sure, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/01/204-David-Robbe-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 203 David Krakauer: How To Think Like a Complexity Scientist</title>
	<link>https://braininspired.co/podcast/203/</link>
	<pubDate>Tue, 14 Jan 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2418</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p><a href="https://www.thetransmitter.org/">The Transmitter</a> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book <a href="https://www.sfipress.org/books/the-complex-world">The Complex World: An Introduction to the Fundamentals of Complexity Science</a>.</p>



<p>The book on the one hand serves as an introduction and a guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal to learn and practice how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same.</p>





<ul class="wp-block-list">
<li><a href="https://davidckrakauer.com/">David's website</a>.</li>



<li><a href="https://www.santafe.edu/people/profile/david-krakauer">David's SFI homepage</a>.</li>



<li>The book: <a href="https://www.sfipress.org/books/the-complex-world">The Complex World: An Introduction to the Fundamentals of Complexity Science</a>.</li>



<li>The 4-Volume Series: <a href="https://www.sfipress.org/books/foundational-papers-in-complexity-science">Foundational Papers in Complexity Science.</a></li>



<li>Mentioned:
<ul class="wp-block-list">
<li>Aeon article: <a href="https://aeon.co/essays/is-life-a-complex-computational-process">Problem-solving matter</a>.</li>



<li><a href="https://link.springer.com/article/10.1007/s12064-020-00313-7">The information theory of individuality</a>.</li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/01/BI-203-transcript-proof.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:45 - Origins of The Complex World
20:10 - 4 pillars of complexity
36:27 - 40s to 70s in complexity
42:33 - How to proceed as a complexity scientist
54:32 - Broken symmetries
1:02:40 - Emergence
1:13:25 - Time scales and complexity
1:18:48 - Consensus and how ideas migrate
1:29:25 - Disciplinary matrix (Kuhn)
1:32:45 - Intelligence vs. life</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p><a href="https://www.thetransmitter.org/">The Transmitter</a> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book <a href="https://www.sfipress.org/books/the-complex-world">The Complex World: An Introduction to the Fundamentals of Complexity Science</a>.</p>



<p>The book on the one hand serves as an introduction and a guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal to learn and practice how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same.</p>





<ul class="wp-block-list">
<li><a href="https://davidckrakauer.com/">David's website</a>.</li>



<li><a href="https://www.santafe.edu/people/profile/david-krakauer">David's SFI homepage</a>.</li>



<li>The book: <a href="https://www.sfipress.org/books/the-complex-world">The Complex World: An Introduction to the Fundamentals of Complexity Science</a>.</li>



<li>The 4-Volume Series: <a href="https://www.sfipress.org/books/foundational-papers-in-complexity-science">Foundational Papers in Complexity Science.</a></li>



<li>Mentioned:
<ul class="wp-block-list">
<li>Aeon article: <a href="https://aeon.co/essays/is-life-a-complex-computational-process">Problem-solving matter</a>.</li>



<li><a href="https://link.springer.com/article/10.1007/s12064-020-00313-7">The information theory of individuality</a>.</li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2025/01/BI-203-transcript-proof.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:45 - Origins of The Complex World
20:10 - 4 pillars of complexity
36:27 - 40s to 70s in complexity
42:33 - How to proceed as a complexity scientist
54:32 - Broken symmetries
1:02:40 - Emergence
1:13:25 - Time scales and complexity
1:18:48 - Consensus and how ideas migrate
1:29:25 - Disciplinary matrix (Kuhn)
1:32:45 - Intelligence vs. life</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2418/203.mp3" length="102990617" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book The Complex World: An Introduction to the Fundamentals of Complexity Science.



The book on the one hand serves as an introduction and a guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal to learn and practice how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same.






David's website.



David's SFI homepage.



The book: The Complex World: An Introduction to the Fundamentals of Complexity Science.



The 4-Volume Series: Foundational Papers in Complexity Science.



Mentioned:

Aeon article: Problem-solving matter.



The information theory of individuality.






Read the&nbsp;transcript.



0:00 - Intro
3:45 - Origins of The Complex World
20:10 - 4 pillars of complexity
36:27 - 40s to 70s in complexity
42:33 - How to proceed as a complexity scientist
54:32 - Broken symmetries
1:02:40 - Emergence
1:13:25 - Time scales and complexity
1:18:48 - Consensus and how ideas migrate
1:29:25 - Disciplinary matrix (Kuhn)
1:32:45 - Intelligence vs. life]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/01/example-thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/01/example-thumb-website.jpg</url>
		<title>BI 203 David Krakauer: How To Think Like a Complexity Scientist</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:46:03</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book The ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/01/example-thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 202 Eli Sennesh: Divide-and-Conquer to Predict</title>
	<link>https://braininspired.co/podcast/202/</link>
	<pubDate>Fri, 03 Jan 2025 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2411</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em><a href="http://thetransmitter.org/">The Transmitter</a></em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new Brain Inspired episode is released<a href="https://www.thetransmitter.org/newsletters/">.</a></p>









<p>Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre’s lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. So in that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher, predictive coding is the proposal that the brain is constantly predicting what's about to happen, then stuff happens, and the brain uses the mismatch between its predictions and the actual stuff that's happening to learn how to make better predictions moving forward. I refer you to the previous episode for more details. So Eli's account, developed with his co-authors of course, which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach in an attempt to account for how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving into the experimental side from the theoretical side.</p>



<ul class="wp-block-list">
<li><a href="https://esennesh.github.io/">Eli's website</a>.</li>



<li><a href="https://www.bastoslabvu.com/">Bastos lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/EliSennesh">@EliSennesh</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2408.05834">Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm</a>.</li>
</ul>
</li>



<li>Related episode:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/201/">BI 201 Rajesh Rao: Active Predictive Coding</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-202-transcript-proof.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:59 - Eli's worldview
17:56 - NeuroAI is hard
24:38 - Prediction errors vs surprise
55:16 - Divide and conquer
1:13:24 - Challenges
1:18:44 - How to build AI
1:25:56 - Affect
1:31:55 - Abolish the value function</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em><a href="http://thetransmitter.org/">The Transmitter</a></em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new Brain Inspired episode is released<a href="https://www.thetransmitter.org/newsletters/">.</a></p>









<p>Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre’s lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. So in that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher, predictive coding is the proposal that the brain is constantly predicting what's about to happen, then stuff happens, and the brain uses the mismatch between its predictions and the actual stuff that's happening to learn how to make better predictions moving forward. I refer you to the previous episode for more details. So Eli's account, developed with his co-authors of course, which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach in an attempt to account for how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving into the experimental side from the theoretical side.</p>



<ul class="wp-block-list">
<li><a href="https://esennesh.github.io/">Eli's website</a>.</li>



<li><a href="https://www.bastoslabvu.com/">Bastos lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/EliSennesh">@EliSennesh</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/pdf/2408.05834">Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm</a>.</li>
</ul>
</li>



<li>Related episode:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/201/">BI 201 Rajesh Rao: Active Predictive Coding</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-202-transcript-proof.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:59 - Eli's worldview
17:56 - NeuroAI is hard
24:38 - Prediction errors vs surprise
55:16 - Divide and conquer
1:13:24 - Challenges
1:18:44 - How to build AI
1:25:56 - Affect
1:31:55 - Abolish the value function</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2411/202.mp3" length="95173640" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new Brain Inspired episode is released.









Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre’s lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. So in that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher, predictive coding is the proposal that the brain is constantly predicting what's about to happen, then stuff happens, and the brain uses the mismatch between its predictions and the actual stuff that's happening to learn how to make better predictions moving forward. I refer you to the previous episode for more details. So Eli's account, developed with his co-authors of course, which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach in an attempt to account for how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving into the experimental side from the theoretical side.




Eli's website.



Bastos lab.



Twitter: @EliSennesh



Related papers

Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm.





Related episode:

BI 201 Rajesh Rao: Active Predictive Coding.






Read the transcript.



0:00 - Intro
3:59 - Eli's worldview
17:56 - NeuroAI is hard
24:38 - Prediction errors vs surprise
55:16 - Divide and conquer
1:13:24 - Challenges
1:18:44 - How to build AI
1:25:56 - Affect
1:31:55 - Abolish the value function]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2025/01/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2025/01/website-thumb.jpg</url>
		<title>BI 202 Eli Sennesh: Divide-and-Conquer to Predict</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:38:11</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new Brain Inspired episode is released.









Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre’s lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how p]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2025/01/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors</title>
	<link>https://braininspired.co/podcast/201/</link>
	<pubDate>Wed, 18 Dec 2024 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2404</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became <a href="https://pubmed.ncbi.nlm.nih.gov/10195184/">quite a famous paper</a>, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top" where the brain then updates its internal model to make better future predictions.</p>



<p>So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception, and suggests how it might be implemented in the cortex - specifically which cortical layers do what - something he calls "Active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and like neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script.</p>



<ul class="wp-block-list">
<li><a href="https://www.rajeshpnrao.com/">Raj's website</a>.</li>



<li>Twitter: <a href="https://x.com/RajeshPNRao">@RajeshPNRao</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-024-01673-9">A sensory–motor theory of the neocortex</a>.</li>



<li><a href="https://arxiv.org/pdf/2012.03378">Brain co-processors: using AI to restore and augment brain function</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438818301843">Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces</a>.</li>



<li><a href="https://www.nature.com/articles/s41598-019-41895-7">BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains</a>.</li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-201-transcript.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:40 - Predictive coding origins
16:14 - Early appreciation of recurrence
17:08 - Prediction as a general theory of the brain
18:38 - Rao and Ballard 1999
26:32 - Prediction as a general theory of the brain
33:24 - Perception vs action
33:28 - Active predictive coding
45:04 - Evolving to augment our brains
53:03 - BrainNet
57:12 - Neural co-processors
1:11:19 - Decoding the Indus Script
1:20:18 - Transformer models relation to active predictive coding</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-dire]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became <a href="https://pubmed.ncbi.nlm.nih.gov/10195184/">quite a famous paper</a>, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top" where the brain then updates its internal model to make better future predictions.</p>



<p>So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception, and suggests how it might be implemented in the cortex - specifically which cortical layers do what - something he calls "Active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and like neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script.</p>



<ul class="wp-block-list">
<li><a href="https://www.rajeshpnrao.com/">Raj's website</a>.</li>



<li>Twitter: <a href="https://x.com/RajeshPNRao">@RajeshPNRao</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41593-024-01673-9">A sensory–motor theory of the neocortex</a>.</li>



<li><a href="https://arxiv.org/pdf/2012.03378">Brain co-processors: using AI to restore and augment brain function</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438818301843">Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces</a>.</li>



<li><a href="https://www.nature.com/articles/s41598-019-41895-7">BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains</a>.</li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-201-transcript.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
7:40 - Predictive coding origins
16:14 - Early appreciation of recurrence
17:08 - Prediction as a general theory of the brain
18:38 - Rao and Ballard 1999
26:32 - Prediction as a general theory of the brain
33:24 - Perception vs action
33:28 - Active predictive coding
45:04 - Evolving to augment our brains
53:03 - BrainNet
57:12 - Neural co-processors
1:11:19 - Decoding the Indus Script
1:20:18 - Transformer models relation to active predictive coding</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2404/201.mp3" length="94607178" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top" where the brain then updates its internal model to make better future predictions.



So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception, and suggests how it might be implemented in the cortex - specifically which cortical layers do what - something he calls "Active predictive coding." So we discuss that new proposal, and we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and like neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script.




Raj's website.



Twitter: @RajeshPNRao.



Related papers

A sensory–motor theory of the neocortex.



Brain co-processors: using AI to restore and augment brain function.



Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces.



BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains.






Read the transcript.



0:00 - Intro
7:40 - Predictive coding origins
16:14 - Early appreciation of recurrence
17:08 - Prediction as a general theory of the brain
18:38 - Rao and Ballard 1999
26:32 - Prediction as a general theory of the brain
33:24 - Perception vs action
33:28 - Active predictive coding
45:04 - Evolving to augment our brains
53:03 - BrainNet
57:12 - Neural co-processors
1:11:19 - Decoding the Indus Script
1:20:18 - Transformer models relation to active predictive coding]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/12/thumb-2-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/12/thumb-2-website.jpg</url>
		<title>BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:37:22</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top" where the brain then updates its internal model to make better future predictions.



So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj ju]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/12/thumb-2-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI</title>
	<link>https://braininspired.co/podcast/200/</link>
	<pubDate>Wed, 04 Dec 2024 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2398</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the <a href="https://n4solutionsllc.com/brainneuroai/">2024 BRAIN NeuroAI Workshop</a>. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases.</p>



<p>The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, which perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the field's future. You'll hear more about that in a moment.</p>





<p>That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of many cognitive science concepts, but that also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode.</p>



<ul class="wp-block-list">
<li><a href="https://www.ninds.nih.gov/about-ninds/who-we-are/staff-directory/joseph-monaco">Joe's NIH page</a>.</li>



<li><a href="https://www.ninds.nih.gov/about-ninds/who-we-are/staff-directory/grace-m-hwang">Grace's NIH page</a>.</li>



<li>Twitter:&nbsp;
<ul class="wp-block-list">
<li>Joe: <a href="https://x.com/j_d_monaco">@j_d_monaco</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s12559-022-10081-9">Neurodynamical Computing at the Information Boundaries of Intelligent Systems</a>.</li>



<li><a href="https://link.springer.com/article/10.1007/s00422-020-00823-z">Cognitive swarming in complex environments with attractor dynamics and oscillatory computing</a>.</li>



<li><a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006741">Spatial synchronization codes from coupled rate-phase neurons</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-017-01190-3">Oscillators that sync and swarm</a>.</li>
</ul>
</li>



<li>Mentioned
<ul class="wp-block-list">
<li><a href="http://dx.doi.org/10.1016/j.bica.2016.11.002">A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications</a>.</li>



<li><a href="http://dx.doi.org/10.1002/hipo.23027">Recalling Lashley and reconsolidating Hebb</a>.</li>



<li>BRAIN NeuroAI Workshop (Nov 12–13)
<ul class="wp-block-list">
<li><a href="https://n4solutionsllc.com/wp-content/uploads/2024/11/NIH_BRAIN_NeuroAI_Workshop_Program_Book_508c.pdf">NIH BRAIN NeuroAI Workshop Program Book</a></li>



<li><a href="https://videocast.nih.gov/watch=55160">NIH VideoCast – Day 1 Recording –&nbsp;BRAIN NeuroAI Workshop</a></li>



<li><a href="https://videocast.nih.gov/watch=55262">NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop</a></li>
</ul>
</li>



<li>Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22)
<ul class="wp-block-list">
<li><a href="https://2024.neuro-med.org/">NPBH 2024</a></li>
</ul>
</li>



<li>BRAIN Investigators Meeting 2020 Symposium &amp; Perspective Paper
<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=2jy1ENYHRAw">BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning</a> (YouTube)</li>



<li><a href="https://link.springer.com/article/10.1007/s12559-022-10081-9">Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation</a></li>
</ul>
</li>



<li>NSF/CIRC
<ul class="wp-block-list">
<li><a href="https://new.nsf.gov/funding/opportunities/circ-community-infrastructure-research-computer-information">Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation</a></li>



<li><a href="https://ai.utsa.edu/thor/">THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being</a></li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-200-transcript-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
25:45 - NeuroAI Workshop - neuromorphics
33:31 - Neuromorphics and theory
49:19 - Reflections on the workshop
54:22 - Neurodynamical computing and information boundaries
1:01:04 - Perceptual control theory
1:08:56 - Digital twins and neural foundation models
1:14:02 - Base layer of computation</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the <a href="https://n4solutionsllc.com/brainneuroai/">2024 BRAIN NeuroAI Workshop</a>. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases.</p>



<p>The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, which perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the field's future. You'll hear more about that in a moment.</p>





<p>That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of many cognitive science concepts, but that also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about that in this episode.</p>



<ul class="wp-block-list">
<li><a href="https://www.ninds.nih.gov/about-ninds/who-we-are/staff-directory/joseph-monaco">Joe's NIH page</a>.</li>



<li><a href="https://www.ninds.nih.gov/about-ninds/who-we-are/staff-directory/grace-m-hwang">Grace's NIH page</a>.</li>



<li>Twitter:&nbsp;
<ul class="wp-block-list">
<li>Joe: <a href="https://x.com/j_d_monaco">@j_d_monaco</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s12559-022-10081-9">Neurodynamical Computing at the Information Boundaries of Intelligent Systems</a>.</li>



<li><a href="https://link.springer.com/article/10.1007/s00422-020-00823-z">Cognitive swarming in complex environments with attractor dynamics and oscillatory computing</a>.</li>



<li><a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006741">Spatial synchronization codes from coupled rate-phase neurons</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-017-01190-3">Oscillators that sync and swarm</a>.</li>
</ul>
</li>



<li>Mentioned
<ul class="wp-block-list">
<li><a href="http://dx.doi.org/10.1016/j.bica.2016.11.002">A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications</a>.</li>



<li><a href="http://dx.doi.org/10.1002/hipo.23027">Recalling Lashley and reconsolidating Hebb</a>.</li>



<li>BRAIN NeuroAI Workshop (Nov 12–13)
<ul class="wp-block-list">
<li><a href="https://n4solutionsllc.com/wp-content/uploads/2024/11/NIH_BRAIN_NeuroAI_Workshop_Program_Book_508c.pdf">NIH BRAIN NeuroAI Workshop Program Book</a></li>



<li><a href="https://videocast.nih.gov/watch=55160">NIH VideoCast – Day 1 Recording –&nbsp;BRAIN NeuroAI Workshop</a></li>



<li><a href="https://videocast.nih.gov/watch=55262">NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop</a></li>
</ul>
</li>



<li>Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22)
<ul class="wp-block-list">
<li><a href="https://2024.neuro-med.org/">NPBH 2024</a></li>
</ul>
</li>



<li>BRAIN Investigators Meeting 2020 Symposium &amp; Perspective Paper
<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=2jy1ENYHRAw">BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning</a> (YouTube)</li>



<li><a href="https://link.springer.com/article/10.1007/s12559-022-10081-9">Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation</a></li>
</ul>
</li>



<li>NSF/CIRC
<ul class="wp-block-list">
<li><a href="https://new.nsf.gov/funding/opportunities/circ-community-infrastructure-research-computer-information">Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation</a></li>



<li><a href="https://ai.utsa.edu/thor/">THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being</a></li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/12/BI-200-transcript-final.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
25:45 - NeuroAI Workshop - neuromorphics
33:31 - Neuromorphics and theory
49:19 - Reflections on the workshop
54:22 - Neurodynamical computing and information boundaries
1:01:04 - Perceptual control theory
1:08:56 - Digital twins and neural foundation models
1:14:02 - Base layer of computation</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2398/200.mp3" length="94071308" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases.



The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, one that perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how NeuroAI might contribute moving forward, and to hear from NeuroAI folks how they envision the future of the field. You'll hear more about that in a moment.





That's one reason I invited Grace and Joe on. Another reason is that they co-wrote a position paper a while back that is impressive as a synthesis of many cognitive science concepts, but that also proposes a specific level of abstraction and scale in brain processes that may serve as a base layer for computation. The paper is called Neurodynamical Computing at the Information Boundaries of Intelligent Systems, and you'll learn more about it in this episode.




Joe's NIH page.



Grace's NIH page.



Twitter:&nbsp;

Joe: @j_d_monaco





Related papers

Neurodynamical Computing at the Information Boundaries of Intelligent Systems.



Cognitive swarming in complex environments with attractor dynamics and oscillatory computing.



Spatial synchronization codes from coupled rate-phase neurons.



Oscillators that sync and swarm.





Mentioned

A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications.



Recalling Lashley and reconsolidating Hebb.



BRAIN NeuroAI Workshop (Nov 12–13)

NIH BRAIN NeuroAI Workshop Program Book



NIH VideoCast – Day 1 Recording –&nbsp;BRAIN NeuroAI Workshop



NIH VideoCast – Day 2 Recording – BRAIN NeuroAI Workshop





Neuromorphic Principles in Biomedicine and Healthcare Workshop (Oct 21–22)

NPBH 2024





BRAIN Investigators Meeting 2020 Symposium &amp; Perspective Paper

BRAIN 2020 Symposium on Dynamical Systems Neuroscience and Machine Learning (YouTube)



Neurodynamical Computing at the Information Boundaries of Intelligent Systems | Cognitive Computation





NSF/CIRC

Community Infrastructure for Research in Computer and Information Science and Engineering (CIRC) | NSF - National Science Foundation



THOR Neuromorphic Commons - Matrix: The UTSA AI Consortium for Human Well-Being








Read the transcript.



0:00 - Intro
25:45 - NeuroAI Workshop - neuromorphics
33:31 - Neuromorphics and theory
49:19 - Reflections on the workshop
54:22 - Neurodynamical computing and information boundaries
1:01:04 - Perceptual control theory
1:08:56 - Digital twins and neural foundation models
1:14:02 - Base layer of computation]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/12/thumb-2-web.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/12/thumb-2-web.jpg</url>
		<title>BI 200 Grace Hwang and Joe Monaco: The Future of NeuroAI</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:37:11</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Joe Monaco and Grace Hwang co-organized a recent workshop I participated in, the 2024 BRAIN NeuroAI Workshop. You may have heard of the BRAIN Initiative, but in case not, BRAIN is a huge funding effort across many agencies, one of which is the National Institutes of Health, where this recent workshop was held. The BRAIN Initiative began in 2013 under the Obama administration, with the goal of supporting the development of technologies to help understand the human brain, so we can cure brain-based diseases.



The BRAIN Initiative just turned a decade old, with many successes, like recent whole-brain connectomes and the discovery of a vast array of cell types. Now the question is how to move forward, and one area they are curious about, one that perhaps has a lot of potential to support their mission, is the recent convergence of neuroscience and AI... or NeuroAI. The workshop was designed to explore how N]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/12/thumb-2-web.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 199 Hessam Akhlaghpour: Natural Universal Computation</title>
	<link>https://braininspired.co/podcast/199/</link>
	<pubDate>Tue, 26 Nov 2024 05:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2393</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. It re-inspired him to think of the brain as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't yet been discovered in organisms if surely evolution has stumbled upon it, and how RNA and combinatory logic might implement universal computation in nature.</p>



<ul class="wp-block-list">
<li><a href="https://www.akhlaghpour.info/">Hessam's website</a>.</li>



<li><a href="https://maimonlab.rockefeller.edu/">Maimon Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/theHessam">@theHessam</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/pii/S0022519321004045">An RNA-based theory of natural universal computation</a>.</li>



<li><a href="https://arxiv.org/abs/2209.04923">The molecular memory code and synaptic plasticity: a synthesis</a>.</li>



<li><a href="https://doi.org/10.1126/science.adf3481">Lifelong persistence of nuclear RNAs in the mouse brain</a>.</li>



<li><a href="https://sites.santafe.edu/~moore/pubs/FlowsMaps.pdf">Cris Moore's conjecture #5 in this 1998 paper</a>.</li>



<li>(The Gallistel book): <a href="https://onlinelibrary.wiley.com/doi/book/10.1002/9781444310498">Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>



<li><a href="https://braininspired.co/podcast/172/">BI 172 David Glanzman: Memory All The Way Down</a></li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2024/11/BI-199-transcript.pdf" target="_blank" rel="noreferrer noopener">transcript</a>. </p>



<p>0:00 - Intro
4:44 - Hessam's background
11:50 - Randy Gallistel's book
14:43 - Information in the brain
17:51 - Hessam's turn to universal computation
35:30 - AI and universal computation
40:09 - Universal computation to solve intelligence
44:22 - Connecting sub and super molecular
50:10 - Junk DNA
56:42 - Genetic material for coding
1:06:37 - RNA and combinatory logic
1:35:14 - Outlook
1:42:11 - Reflecting on the molecular world</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. It re-inspired him to think of the brain as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't yet been discovered in organisms if surely evolution has stumbled upon it, and how RNA and combinatory logic might implement universal computation in nature.</p>



<ul class="wp-block-list">
<li><a href="https://www.akhlaghpour.info/">Hessam's website</a>.</li>



<li><a href="https://maimonlab.rockefeller.edu/">Maimon Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://x.com/theHessam">@theHessam</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/pii/S0022519321004045">An RNA-based theory of natural universal computation</a>.</li>



<li><a href="https://arxiv.org/abs/2209.04923">The molecular memory code and synaptic plasticity: a synthesis</a>.</li>



<li><a href="https://doi.org/10.1126/science.adf3481">Lifelong persistence of nuclear RNAs in the mouse brain</a>.</li>



<li><a href="https://sites.santafe.edu/~moore/pubs/FlowsMaps.pdf">Cris Moore's conjecture #5 in this 1998 paper</a>.</li>



<li>(The Gallistel book): <a href="https://onlinelibrary.wiley.com/doi/book/10.1002/9781444310498">Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience</a>.</li>
</ul>
</li>



<li>Related episodes
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>



<li><a href="https://braininspired.co/podcast/172/">BI 172 David Glanzman: Memory All The Way Down</a></li>
</ul>
</li>
</ul>



<p>Read the&nbsp;<a href="https://www.thetransmitter.org/wp-content/uploads/2024/11/BI-199-transcript.pdf" target="_blank" rel="noreferrer noopener">transcript</a>. </p>



<p>0:00 - Intro
4:44 - Hessam's background
11:50 - Randy Gallistel's book
14:43 - Information in the brain
17:51 - Hessam's turn to universal computation
35:30 - AI and universal computation
40:09 - Universal computation to solve intelligence
44:22 - Connecting sub and super molecular
50:10 - Junk DNA
56:42 - Genetic material for coding
1:06:37 - RNA and combinatory logic
1:35:14 - Outlook
1:42:11 - Reflecting on the molecular world</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2393/199.mp3" length="105925571" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience, mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves theoretical explorations of molecular computation, which are deeply inspired by Randy Gallistel and Adam King's book Memory and the Computational Brain. Randy has been on the podcast before to discuss his ideas that memory needs to be stored in something more stable than the synapses between neurons, and how that something could be genetic material like RNA. When Hessam read this book, he was re-inspired to think of the brain the way he used to think of it before experimental neuroscience challenged his views. It re-inspired him to think of the brain as a computational system. But it also led to what we discuss today, the idea that RNA has the capacity for universal computation, and Hessam's development of how that might happen. So we discuss that background and story, why universal computation hasn't yet been discovered in organisms if surely evolution has stumbled upon it, and how RNA and combinatory logic might implement universal computation in nature.




Hessam's website.



Maimon Lab.



Twitter:&nbsp;@theHessam.



Related papers

An RNA-based theory of natural universal computation.



The molecular memory code and synaptic plasticity: a synthesis.



Lifelong persistence of nuclear RNAs in the mouse brain.



Cris Moore's conjecture #5 in this 1998 paper.



(The Gallistel book): Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience.





Related episodes

BI 126 Randy Gallistel: Where Is the Engram?



BI 172 David Glanzman: Memory All The Way Down






Read the&nbsp;transcript. 



0:00 - Intro
4:44 - Hessam's background
11:50 - Randy Gallistel's book
14:43 - Information in the brain
17:51 - Hessam's turn to universal computation
35:30 - AI and universal computation
40:09 - Universal computation to solve intelligence
44:22 - Connecting sub and super molecular
50:10 - Junk DNA
56:42 - Genetic material for coding
1:06:37 - RNA and combinatory logic
1:35:14 - Outlook
1:42:11 - Reflecting on the molecular world]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/11/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/11/thumb-website.jpg</url>
		<title>BI 199 Hessam Akhlaghpour: Natural Universal Computation</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:49:07</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.



Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Hessam Akhlaghpour is a postdoctoral researcher at Rockefeller University in the Maimon lab. His experimental work is in fly neuroscience mostly studying spatial memories in fruit flies. However, we are going to be talking about a different (although somewhat related) side of his postdoctoral research. This aspect of his work involves t]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/11/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 198 Tony Zador: Neuroscience Principles to Improve AI</title>
	<link>https://braininspired.co/podcast/198/</link>
	<pubDate>Mon, 11 Nov 2024 15:28:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2389</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>





<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit thetransmitter.org.</p>



<p>Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors span a wide variety of topics, but today we focus mostly on his thoughts on NeuroAI.</p>



<p>We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past.</p>



<p>Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency.</p>



<ul class="wp-block-list">
<li><a href="https://zadorlab.labsites.cshl.edu/">Zador Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/TonyZador">@TonyZador</a></li>



<li>Previous episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/187/">BI 187: COSYNE 2024 Neuro-AI Panel</a>.</li>



<li><a href="https://braininspired.co/podcast/125/">BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys</a></li>



<li><a href="https://braininspired.co/podcast/34/">BI 034 Tony Zador: How DNA and Evolution Can Inform AI</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2024/08/Catalyzing-Next-Generation-Artificial-Intelligence-through-NeuroAI.pdf">Catalyzing next-generation Artificial Intelligence through NeuroAI</a>.</li>



<li><a href="https://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2024/09/shuvaev-et-al-2024-Encoding-innate-ability-through-a-genomic-bottleneck-.pdf">Encoding innate ability through a genomic bottleneck</a>.</li>
</ul>
</li>



<li>Essays
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/">NeuroAI: A field born from the symbiosis between neuroscience, AI</a>.</li>



<li><a href="https://www.thetransmitter.org/neuroai/what-the-brain-can-teach-artificial-neural-networks/">What the brain can teach artificial neural networks</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/11/BI-198-transcript-final-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:28 - "Neuro-AI"
12:48 - Visual cognition history
18:24 - Information theory in neuroscience
20:47 - Necessary steps for progress
24:34 - Neuro-AI models and cognition
35:47 - Animals for inspiring AI
41:48 - What we want AI to do
46:01 - Development and AI
59:03 - Robots
1:25:10 - Catalyzing the next generation of AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>





<p>Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: <a href="https://www.thetransmitter.org/newsletters/">https://www.thetransmitter.org/newsletters/</a></p>



<p>To explore more neuroscience news and perspectives, visit thetransmitter.org.</p>



<p>Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors span a wide variety of topics, but today we focus mostly on his thoughts on NeuroAI.</p>



<p>We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past.</p>



<p>Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency.</p>



<ul class="wp-block-list">
<li><a href="https://zadorlab.labsites.cshl.edu/">Zador Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/TonyZador">@TonyZador</a></li>



<li>Previous episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/187/">BI 187: COSYNE 2024 Neuro-AI Panel</a>.</li>



<li><a href="https://braininspired.co/podcast/125/">BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys</a></li>



<li><a href="https://braininspired.co/podcast/34/">BI 034 Tony Zador: How DNA and Evolution Can Inform AI</a></li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2024/08/Catalyzing-Next-Generation-Artificial-Intelligence-through-NeuroAI.pdf">Catalyzing next-generation Artificial Intelligence through NeuroAI</a>.</li>



<li><a href="https://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2024/09/shuvaev-et-al-2024-Encoding-innate-ability-through-a-genomic-bottleneck-.pdf">Encoding innate ability through a genomic bottleneck</a>.</li>
</ul>
</li>



<li>Essays
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/">NeuroAI: A field born from the symbiosis between neuroscience, AI</a>.</li>



<li><a href="https://www.thetransmitter.org/neuroai/what-the-brain-can-teach-artificial-neural-networks/">What the brain can teach artificial neural networks</a>.</li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/11/BI-198-transcript-final-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>



<p>0:00 - Intro
3:28 - "Neuro-AI"
12:48 - Visual cognition history
18:24 - Information theory in neuroscience
20:47 - Necessary steps for progress
24:34 - Neuro-AI models and cognition
35:47 - Animals for inspiring AI
41:48 - What we want AI to do
46:01 - Development and AI
59:03 - Robots
1:25:10 - Catalyzing the next generation of AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2389/198.mp3" length="92314860" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.





Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are of a wide variety, but today we focus mostly on his thoughts on NeuroAI.



We're in a huge AI hype cycle right now, for good reason, and there's a lot of talk in the neuroscience world about whether neuroscience has anything of value to provide AI engineers - and how much value, if any, neuroscience has provided in the past.



Tony is team neuroscience. You'll hear him discuss why in this episode, especially when it comes to ways in which development and evolution might inspire better data efficiency, looking to animals in general to understand how they coordinate numerous objective functions to achieve their intelligent behaviors - something Tony calls alignment - and using spikes in AI models to increase energy efficiency.




Zador Lab



Twitter:&nbsp;@TonyZador



Previous episodes:

BI 187: COSYNE 2024 Neuro-AI Panel.



BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys



BI 034 Tony Zador: How DNA and Evolution Can Inform AI





Related papers

Catalyzing next-generation Artificial Intelligence through NeuroAI.



Encoding innate ability through a genomic bottleneck.





Essays

NeuroAI: A field born from the symbiosis between neuroscience, AI.



What the brain can teach artificial neural networks.






Read the transcript.



0:00 - Intro
3:28 - "Neuro-AI"
12:48 - Visual cognition history
18:24 - Information theory in neuroscience
20:47 - Necessary steps for progress
24:34 - Neuro-AI models and cognition
35:47 - Animals for inspiring AI
41:48 - What we want AI to do
46:01 - Development and AI
59:03 - Robots
1:25:10 - Catalyzing the next generation of AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/11/198-Tony-Zador-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/11/198-Tony-Zador-website.jpg</url>
		<title>BI 198 Tony Zador: Neuroscience Principles to Improve AI</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:35:04</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.





Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Tony Zador runs the Zador lab at Cold Spring Harbor Laboratory. You've heard him on Brain Inspired a few times in the past, most recently in a panel discussion I moderated at this past COSYNE conference - a conference Tony co-founded 20 years ago. As you'll hear, Tony's current and past interests and research endeavors are of a wide var]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/11/198-Tony-Zador-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 197 Karen Adolph: How Babies Learn to Move and Think</title>
	<link>https://braininspired.co/podcast/197/</link>
	<pubDate>Fri, 25 Oct 2024 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2383</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>





<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Karen Adolph runs the <a href="https://www.nyuactionlab.com/">Infant Action Lab</a> at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors.</p>





<p>Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives.</p>



<ul class="wp-block-list">
<li><a href="https://www.nyuactionlab.com/">Infant Action Lab</a> (Karen Adolph's lab)</li>



<li><a href="https://blumberg.lab.uiowa.edu/">Sleep and Behavioral Development Lab</a> (Mark Blumberg's lab)</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.annualreviews.org/content/journals/10.1146/annurev-psych-010418-102836">Motor Development: Embodied, Embedded, Enculturated, and Enabling</a></li>



<li><a href="https://www.jstor.org/stable/26871620">An Ecological Approach to Learning in (Not and) Development</a></li>



<li><a href="https://75f52ccb-a7bc-4767-bded-13af6094bc0e.usrfiles.com/ugd/75f52c_cf43c0b3cdf9496791bf515f54d4b7cc.pdf">An update of the development of motor behavior</a></li>



<li><a href="https://drive.google.com/file/d/1QjNdXakZAZyy0FsuBWoinzDQMi5uuz3j/view">Protracted development of motor cortex constrains rich interpretations of infant cognition</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-197-final-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Vi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p><em>The Transmitter</em> is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>





<p>Sign up for the <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, visit <a href="http://thetransmitter.org">thetransmitter.org</a>.</p>



<p>Karen Adolph runs the <a href="https://www.nyuactionlab.com/">Infant Action Lab</a> at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors.</p>





<p>Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives.</p>



<ul class="wp-block-list">
<li><a href="https://www.nyuactionlab.com/">Infant Action Lab</a> (Karen Adolph's lab)</li>



<li><a href="https://blumberg.lab.uiowa.edu/">Sleep and Behavioral Development Lab</a> (Mark Blumberg's lab)</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.annualreviews.org/content/journals/10.1146/annurev-psych-010418-102836">Motor Development: Embodied, Embedded, Enculturated, and Enabling</a></li>



<li><a href="https://www.jstor.org/stable/26871620">An Ecological Approach to Learning in (Not and) Development</a></li>



<li><a href="https://75f52ccb-a7bc-4767-bded-13af6094bc0e.usrfiles.com/ugd/75f52c_cf43c0b3cdf9496791bf515f54d4b7cc.pdf">An update of the development of motor behavior</a></li>



<li><a href="https://drive.google.com/file/d/1QjNdXakZAZyy0FsuBWoinzDQMi5uuz3j/view">Protracted development of motor cortex constrains rich interpretations of infant cognition</a></li>
</ul>
</li>
</ul>



<p>Read the <a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-197-final-1.pdf" target="_blank" rel="noreferrer noopener">transcript</a>.</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2383/197.mp3" length="86222937" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.





Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to restricted laboratory settings. We also explore how these principles and simulations can inspire advances in intelligent robots. Karen has a long-standing interest in ecological psychology, and she shares some stories of her time studying under Eleanor Gibson and other mentors.





Finally, we get a surprise visit from her partner Mark Blumberg, with whom she co-authored an opinion piece arguing that "motor cortex" doesn't start off with a motor function, oddly enough, but instead processes sensory information during the first period of animals' lives.




Infant Action Lab (Karen Adolph's lab)



Sleep and Behavioral Development Lab (Mark Blumberg's lab)



Related papers

Motor Development: Embodied, Embedded, Enculturated, and Enabling



An Ecological Approach to Learning in (Not and) Development



An update of the development of motor behavior



Protracted development of motor cortex constrains rich interpretations of infant cognition






Read the transcript.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/10/197-thumbnail-final-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/10/197-thumbnail-final-website.jpg</url>
		<title>BI 197 Karen Adolph: How Babies Learn to Move and Think</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:31</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.



Read more about our partnership.





Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.



Karen Adolph runs the Infant Action Lab at NYU, where she studies how our motor behaviors develop from infancy onward. We discuss how observing babies at different stages of development illuminates how movement and cognition develop in humans, how variability and embodiment are key to that development, and the importance of studying behavior in real-world settings as opposed to res]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/10/197-thumbnail-final-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 196 Cristina Savin and Tim Vogels with Gaute Einevoll and Mikkel Lepperød</title>
	<link>https://braininspired.co/podcast/196/</link>
	<pubDate>Fri, 11 Oct 2024 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2376</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>





<p>This is the second conversation I had while teamed up with Gaute Einevoll at <a href="https://www.mn.uio.no/ccse/english/about/news-and-events/news/navigating-the-future-of-neuroai.html">a workshop on NeuroAI</a> in Norway. In this episode, Gaute and I are joined by <a href="https://csavin.wixsite.com/savinlab">Cristina Savin</a> and <a href="https://vogelslab.org/people/">Tim Vogels</a>. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules.</p>



<p>We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions.</p>



<p>Be sure to check out Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast as well!</p>







<ul class="wp-block-list">
<li><a href="https://lepmik.github.io/">Mikkel Lepperød</a></li>



<li><a href="https://csavin.wixsite.com/savinlab">Cristina Savin</a></li>



<li><a href="https://vogelslab.org/people/">Tim Vogels</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/tpvogels">@TPVogels</a></li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/fysikk/english/people/aca/geinevol/index.html">Gaute Einevoll</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/GauteEinevoll">@GauteEinevoll</a></li>



<li>Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast.</li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/ccse/english/about/news-and-events/events/2024/neuro-ai-workshop.html" target="_blank" rel="noreferrer noopener">Validating models: How would success in NeuroAI look like?</a></li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-196-final.pdf">Read the transcript</a>, provided by <em>The Transmitter</em>.</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>





<p>This is the second conversation I had while teamed up with Gaute Einevoll at <a href="https://www.mn.uio.no/ccse/english/about/news-and-events/news/navigating-the-future-of-neuroai.html">a workshop on NeuroAI</a> in Norway. In this episode, Gaute and I are joined by <a href="https://csavin.wixsite.com/savinlab">Cristina Savin</a> and <a href="https://vogelslab.org/people/">Tim Vogels</a>. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules.</p>



<p>We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions.</p>



<p>Be sure to check out Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast as well!</p>







<ul class="wp-block-list">
<li><a href="https://lepmik.github.io/">Mikkel Lepperød</a></li>



<li><a href="https://csavin.wixsite.com/savinlab">Cristina Savin</a></li>



<li><a href="https://vogelslab.org/people/">Tim Vogels</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/tpvogels">@TPVogels</a></li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/fysikk/english/people/aca/geinevol/index.html">Gaute Einevoll</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/GauteEinevoll">@GauteEinevoll</a></li>



<li>Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast.</li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/ccse/english/about/news-and-events/events/2024/neuro-ai-workshop.html" target="_blank" rel="noreferrer noopener">Validating models: How would success in NeuroAI look like?</a></li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-196-final.pdf">Read the transcript</a>, provided by <em>The Transmitter</em>.</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2376/196.mp3" length="90966565" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;





This is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway. In this episode, Gaute and I are joined by Cristina Savin and Tim Vogels. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules.



We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how these new tools have changed the way we think about brain function without fundamentally altering the structure of our questions.



Be sure to check out Gaute's Theoretical Neuroscience podcast as well!








Mikkel Lepperød



Cristina Savin



Tim Vogels

Twitter: @TPVogels





Gaute Einevoll

Twitter: @GauteEinevoll



Gaute's Theoretical Neuroscience podcast.





Validating models: How would success in NeuroAI look like?




Read the transcript, provided by The Transmitter.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/10/196-Artboard-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/10/196-Artboard-website.jpg</url>
		<title>BI 196 Cristina Savin and Tim Vogels with Gaute Einevoll and Mikkel Lepperød</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:19:40</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;





This is the second conversation I had while teamed up with Gaute Einevoll at a workshop on NeuroAI in Norway. In this episode, Gaute and I are joined by Cristina Savin and Tim Vogels. Cristina shares how her lab uses recurrent neural networks to study learning, while Tim talks about his long-standing research on synaptic plasticity and how AI tools are now helping to explore the vast space of possible plasticity rules.



We touch on how deep learning has changed the landscape, enhancing our research but also creating challenges with the "fashion-driven" nature of science today. We also reflect on how the]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/10/196-Artboard-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 195 Ken Harris and Andreas Tolias with Gaute Einevoll and Mikkel Lepperød</title>
	<link>https://braininspired.co/podcast/195/</link>
	<pubDate>Tue, 08 Oct 2024 04:00:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2373</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>





<p>This is the first of two less usual episodes. I was recently in Norway at a NeuroAI workshop called <a href="https://www.mn.uio.no/ccse/english/about/news-and-events/news/navigating-the-future-of-neuroai.html" target="_blank" rel="noreferrer noopener">Validating models: How would success in NeuroAI look like?</a> What follows are a few recordings I made with my friend Gaute Einevoll. Gaute has <a href="https://braininspired.co/podcast/148/">been on this podcast before</a>, but more importantly he started his own podcast a while back called <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a>, which you should check out.</p>



<p>Gaute and I introduce the episode, then briefly speak with <a href="https://www.mn.uio.no/ibv/english/people/aca/bjornmik/index.html">Mikkel Lepperød</a>, one of the organizers of the workshop. In this first episode, we're then joined by <a href="https://www.ucl.ac.uk/cortexlab/">Ken Harris</a> and <a href="https://toliaslab.org/">Andreas Tolias</a> to discuss how AI has influenced their research, thoughts about brains and minds, and progress and productivity.</p>



<ul class="wp-block-list">
<li><a href="https://www.mn.uio.no/ccse/english/about/news-and-events/events/2024/neuro-ai-workshop.html">Validating models: How would success in NeuroAI look like?</a></li>



<li><a href="https://lepmik.github.io/">Mikkel Lepperød</a></li>



<li><a href="https://toliaslab.org/">Andreas Tolias</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/AToliasLab">@AToliasLab</a></li>
</ul>
</li>



<li><a href="https://www.ucl.ac.uk/cortexlab/">Ken Harris</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/kennethd_harris">@kennethd_harris</a></li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/fysikk/english/people/aca/geinevol/index.html">Gaute Einevoll</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/GauteEinevoll">@GauteEinevoll</a></li>



<li>Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast.</li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-195-transcript-final_REVISED.pdf">Read the transcript</a>, provided by <em>The Transmitter</em>.</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Vi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.</p>





<p>This is the first of two less usual episodes. I was recently in Norway at a NeuroAI workshop called <a href="https://www.mn.uio.no/ccse/english/about/news-and-events/news/navigating-the-future-of-neuroai.html" target="_blank" rel="noreferrer noopener">Validating models: How would success in NeuroAI look like?</a> What follows are a few recordings I made with my friend Gaute Einevoll. Gaute has <a href="https://braininspired.co/podcast/148/">been on this podcast before</a>, but more importantly he started his own podcast a while back called <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a>, which you should check out.</p>



<p>Gaute and I introduce the episode, then briefly speak with <a href="https://www.mn.uio.no/ibv/english/people/aca/bjornmik/index.html">Mikkel Lepperød</a>, one of the organizers of the workshop. In this first episode, we're then joined by <a href="https://www.ucl.ac.uk/cortexlab/">Ken Harris</a> and <a href="https://toliaslab.org/">Andreas Tolias</a> to discuss how AI has influenced their research, thoughts about brains and minds, and progress and productivity.</p>



<ul class="wp-block-list">
<li><a href="https://www.mn.uio.no/ccse/english/about/news-and-events/events/2024/neuro-ai-workshop.html">Validating models: How would success in NeuroAI look like?</a></li>



<li><a href="https://lepmik.github.io/">Mikkel Lepperød</a></li>



<li><a href="https://toliaslab.org/">Andreas Tolias</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/AToliasLab">@AToliasLab</a></li>
</ul>
</li>



<li><a href="https://www.ucl.ac.uk/cortexlab/">Ken Harris</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/kennethd_harris">@kennethd_harris</a></li>
</ul>
</li>



<li><a href="https://www.mn.uio.no/fysikk/english/people/aca/geinevol/index.html">Gaute Einevoll</a>
<ul class="wp-block-list">
<li>Twitter: <a href="https://x.com/GauteEinevoll">@GauteEinevoll</a></li>



<li>Gaute's <a href="https://theoreticalneuroscience.no/">Theoretical Neuroscience</a> podcast.</li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/10/BI-195-transcript-final_REVISED.pdf">Read the transcript</a>, provided by <em>The Transmitter</em>.</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2373/195.mp3" length="74292660" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.





This is the first of two less usual episodes. I was recently in Norway at a NeuroAI workshop called Validating models: How would success in NeuroAI look like? What follows are a few recordings I made with my friend Gaute Einevoll. Gaute has been on this podcast before, but more importantly he started his own podcast a while back called Theoretical Neuroscience, which you should check out.



Gaute and I introduce the episode, then briefly speak with Mikkel Lepperød, one of the organizers of the workshop. In this first episode, we're then joined by Ken Harris and Andreas Tolias to discuss how AI has influenced their research, thoughts about brains and minds, and progress and productivity.




Validating models: How would success in NeuroAI look like?



Mikkel Lepperød



Andreas Tolias

Twitter: @AToliasLab





Ken Harris

Twitter: @kennethd_harris





Gaute Einevoll

Twitter: @GauteEinevoll



Gaute's Theoretical Neuroscience podcast.






Read the transcript, provided by The Transmitter.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/10/195Artboard-4-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/10/195Artboard-4-website.jpg</url>
		<title>BI 195 Ken Harris and Andreas Tolias with Gaute Einevoll and Mikkel Lepperød</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:17:05</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.





This is the first of two less usual episodes. I was recently in Norway at a NeuroAI workshop called Validating models: How would success in NeuroAI look like? What follows are a few recordings I made with my friend Gaute Einevoll. Gaute has been on this podcast before, but more importantly he started his own podcast a while back called Theoretical Neuroscience, which you should check out.



Gaute and I introduce the episode, then briefly speak with Mikkel Lepperød, one of the organizers of the workshop. In this first episode, we're then joined by Ken Harris and Andreas Tolias to discuss how AI has influenced the]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/10/195Artboard-4-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 194 Vijay Namboodiri &#038; Ali Mohebi: Dopamine Keeps Getting More Interesting</title>
	<link>https://braininspired.co/podcast/194/</link>
	<pubDate>Fri, 27 Sep 2024 04:14:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2368</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>




https://youtu.be/lbKEOdbeqHo




<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>











<p>The Transmitter has provided a transcript for this episode.</p>



<p>Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. And it was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently than what has become the classic story of dopamine's function as it pertains to learning. The classic story is that dopamine is related to reward prediction errors. That is, dopamine is modulated when you expect reward and don't get it, and/or when you don't expect reward but do get it. Vijay calls this a "prospective" account of dopamine function, since it requires an animal to look into the future to expect a reward. Vijay has shown, however, that a retrospective account of dopamine might better explain lots of known behavioral data. This retrospective account links dopamine to how we understand causes and effects in our ongoing behavior. So in this episode, Vijay gives us a history lesson about dopamine, his newer story and why it has caused a bit of controversy, and how all of this came to be.</p>



<p>I happened to be looking at <em>The Transmitter</em> the other day, after I recorded this episode, and lo and behold, there was an article titled <a href="https://www.thetransmitter.org/dopamine/reconstructing-dopamines-link-to-reward/">Reconstructing dopamine’s link to reward</a>. Vijay is featured in the article among a handful of other thoughtful researchers who share their work and ideas about this very topic. Vijay wrote his own piece as well: <a href="https://www.thetransmitter.org/dopamine/dopamine-and-the-need-for-alternative-theories/">Dopamine and the need for alternative theories</a>. So check out those articles for more views on how the field is reconsidering how dopamine works.</p>



<ul class="wp-block-list">
<li><a href="https://www.namboodirilab.org/research">Nam Lab</a>.</li>



<li><a href="https://lab.mohebial.com/">Mohebi &amp; Associates (Ali's Lab)</a>.</li>



<li>Twitter:
<ul class="wp-block-list">
<li><a href="https://x.com/vijay_mkn">@vijay_mkn</a></li>



<li><a href="https://twitter.com/mohebial">@mohebial</a></li>
</ul>
</li>



<li>Transmitter
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/dopamine/dopamine-and-the-need-for-alternative-theories/">Dopamine and the need for alternative theories</a>.</li>



<li><a href="https://www.thetransmitter.org/dopamine/reconstructing-dopamines-link-to-reward/">Reconstructing dopamine’s link to reward</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.science.org/stoken/author-tokens/ST-895/full">Mesolimbic dopamine release conveys causal associations</a>.</li>



<li><a href="https://www.science.org/doi/full/10.1126/sciadv.adn4203">Mesostriatal dopamine is sensitive to changes in specific cue-reward contingencies</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2021.02.07.430001v1.abstract">What is the state space of the world for real animals?</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0896627321007078">The learning of prospective and retrospective cognitive maps within neural circuits</a></li>
</ul>
</li>



<li>Further reading
<ul class="wp-block-list">
<li>(Ali's paper): <a href="https://www.nature.com/articles/s41593-023-01566-3">Dopamine transients follow a striatal gradient of reward time horizons.</a></li>



<li>Ali listed a bunch of work on local modulation of DA release:
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2014.00188/full">Local control of striatal dopamine release</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/35931070/">Synaptic-like axo-axonal transmission from striatal cholinergic interneurons onto dopaminergic fibers</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/33837376/">Spatial and temporal scales of dopamine transmission</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/27141430/">Striatal dopamine neurotransmission: Regulation of release and uptake</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/22794260/">Striatal Dopamine Release Is Triggered by Synchronized Activity in Cholinergic Interneurons</a>.</li>



<li><a href="https://www.science.org/doi/10.1126/science.abn0532">An action potential initiation mechanism in distal axons for the control of dopamine release</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/09/BI-194-transcript-final.pdf">Read the transcript</a>, produced by <a href="http://thetransmitter.org/">The Transmitter</a>.</p>



<p>0:00 - Intro
3:42 - Dopamine: the history of theories
32:54 - Importance of learning and behavior studies
39:12 - Dopamine and causality
1:06:45 - Controversy over Vijay's findings</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.








https://youtu.be/lbKEOdbeqHo




The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neu]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>




https://youtu.be/lbKEOdbeqHo




<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>











<p>The Transmitter has provided a transcript for this episode.</p>



<p>Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. And it was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently than what has become the classic story of dopamine's function as it pertains to learning. The classic story is that dopamine is related to reward prediction errors. That is, dopamine is modulated when you expect reward and don't get it, and/or when you don't expect reward but do get it. Vijay calls this a "prospective" account of dopamine function, since it requires an animal to look into the future to expect a reward. Vijay has shown, however, that a retrospective account of dopamine might better explain lots of known behavioral data. This retrospective account links dopamine to how we understand causes and effects in our ongoing behavior. So in this episode, Vijay gives us a history lesson about dopamine, his newer story and why it has caused a bit of controversy, and how all of this came to be.</p>



<p>I happened to be looking at <em>The Transmitter</em> the other day, after I recorded this episode, and lo and behold, there was an article titled <a href="https://www.thetransmitter.org/dopamine/reconstructing-dopamines-link-to-reward/">Reconstructing dopamine’s link to reward</a>. Vijay is featured in the article among a handful of other thoughtful researchers who share their work and ideas about this very topic. Vijay wrote his own piece as well: <a href="https://www.thetransmitter.org/dopamine/dopamine-and-the-need-for-alternative-theories/">Dopamine and the need for alternative theories</a>. So check out those articles for more views on how the field is reconsidering how dopamine works.</p>



<ul class="wp-block-list">
<li><a href="https://www.namboodirilab.org/research">Nam Lab</a>.</li>



<li><a href="https://lab.mohebial.com/">Mohebi &amp; Associates (Ali's Lab)</a>.</li>



<li>Twitter:
<ul class="wp-block-list">
<li><a href="https://x.com/vijay_mkn">@vijay_mkn</a></li>



<li><a href="https://twitter.com/mohebial">@mohebial</a></li>
</ul>
</li>



<li>Transmitter
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/dopamine/dopamine-and-the-need-for-alternative-theories/">Dopamine and the need for alternative theories</a>.</li>



<li><a href="https://www.thetransmitter.org/dopamine/reconstructing-dopamines-link-to-reward/">Reconstructing dopamine’s link to reward</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.science.org/stoken/author-tokens/ST-895/full">Mesolimbic dopamine release conveys causal associations</a>.</li>



<li><a href="https://www.science.org/doi/full/10.1126/sciadv.adn4203">Mesostriatal dopamine is sensitive to changes in specific cue-reward contingencies</a>.</li>



<li><a href="https://www.biorxiv.org/content/10.1101/2021.02.07.430001v1.abstract">What is the state space of the world for real animals?</a></li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0896627321007078">The learning of prospective and retrospective cognitive maps within neural circuits</a></li>
</ul>
</li>



<li>Further reading
<ul class="wp-block-list">
<li>(Ali's paper): <a href="https://www.nature.com/articles/s41593-023-01566-3">Dopamine transients follow a striatal gradient of reward time horizons.</a></li>



<li>Ali listed a bunch of work on local modulation of DA release:
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/journals/behavioral-neuroscience/articles/10.3389/fnbeh.2014.00188/full">Local control of striatal dopamine release</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/35931070/">Synaptic-like axo-axonal transmission from striatal cholinergic interneurons onto dopaminergic fibers</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/33837376/">Spatial and temporal scales of dopamine transmission</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/27141430/">Striatal dopamine neurotransmission: Regulation of release and uptake</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/22794260/">Striatal Dopamine Release Is Triggered by Synchronized Activity in Cholinergic Interneurons</a>.</li>



<li><a href="https://www.science.org/doi/10.1126/science.abn0532">An action potential initiation mechanism in distal axons for the control of dopamine release</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p><a href="https://www.thetransmitter.org/wp-content/uploads/2024/09/BI-194-transcript-final.pdf">Read the transcript</a>, produced by <a href="http://thetransmitter.org/">The Transmitter</a>.</p>



<p>0:00 - Intro
3:42 - Dopamine: the history of theories
32:54 - Importance of learning and behavior studies
39:12 - Dopamine and causality
1:06:45 - Controversy over Vijay's findings</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2368/194.mp3" length="94118783" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.








https://youtu.be/lbKEOdbeqHo




The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;











The Transmitter has provided a transcript for this episode.



Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. And it was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently than what has become the classic story of dopamine's function as it pertains to learning. The classic story is that dopamine is related to reward prediction errors. That is, dopamine is modulated when you expect reward and don't get it, and/or when you don't expect reward but do get it. Vijay calls this a "prospective" account of dopamine function, since it requires an animal to look into the future to expect a reward. Vijay has shown, however, that a retrospective account of dopamine might better explain lots of known behavioral data. This retrospective account links dopamine to how we understand causes and effects in our ongoing behavior. So in this episode, Vijay gives us a history lesson about dopamine, his newer story and why it has caused a bit of controversy, and how all of this came to be.



I happened to be looking at The Transmitter the other day, after I recorded this episode, and lo and behold, there was an article titled Reconstructing dopamine’s link to reward. Vijay is featured in the article among a handful of other thoughtful researchers who share their work and ideas about this very topic. Vijay wrote his own piece as well: Dopamine and the need for alternative theories. So check out those articles for more views on how the field is reconsidering how dopamine works.




Nam Lab.



Mohebi &amp; Associates (Ali's Lab).



Twitter:

@vijay_mkn



@mohebial





Transmitter

Dopamine and the need for alternative theories.



Reconstructing dopamine’s link to reward.





Related papers

Mesolimbic dopamine release conveys causal associations.



Mesostriatal dopamine is sensitive to changes in specific cue-reward contingencies.



What is the state space of the world for real animals?



The learning of prospective and retrospective cognitive maps within neural circuits





Further reading

(Ali's paper): Dopamine transients follow a striatal gradient of reward time horizons.



Ali listed a bunch of work on local modulation of DA release:

Local control of striatal dopamine release.



Synaptic-like axo-axonal transmission from striatal cholinergic interneurons onto dopaminergic fibers.



Spatial and temporal scales of dopamine transmission.



Striatal dopamine neurotransmission: Regulation of release and uptake.



Striatal Dopamine Release Is Triggered by Synchronized Activity in Cholinergic Interneurons.



An action potential initiation mechanism in distal axons for the control of dopamine release.








Read the transcript, produced by The Transmitter.



0:00 - Intro
3:42 - Dopamine: the history of theories
32:54 - Importance of learning and behavior studies
39:12 - Dopamine and causality
1:06:45 - Controversy over Vijay's findings]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/09/thunb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/09/thunb-website.jpg</url>
		<title>BI 194 Vijay Namboodiri &#038; Ali Mohebi: Dopamine Keeps Getting More Interesting</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:37:21</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.








https://youtu.be/lbKEOdbeqHo




The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;











The Transmitter has provided a transcript for this episode.



Vijay Namboodiri runs the Nam Lab at the University of California, San Francisco, and Ali Mohebi is an assistant professor at the University of Wisconsin-Madison. Ali has been on the podcast a few times before, and he's interested in how neuromodulators like dopamine affect our cognition. And it was Ali who pointed me to Vijay, because of some recent work Vijay has done reassessing how dopamine might function differently than what has become the classic story of dopamine's function as it pertains to learning.]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/09/thunb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 193 Kim Stachenfeld: Enhancing Neuroscience and AI</title>
	<link>https://braininspired.co/podcast/193/</link>
	<pubDate>Wed, 11 Sep 2024 10:36:38 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2355</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story:&nbsp; <strong><a href="https://www.thetransmitter.org/cognitive-neuroscience/monkeys-build-mental-maps-to-navigate-new-tasks/">Monkeys build mental maps to navigate new tasks</a> </strong></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, <strong>visit <a href="http://thetransmitter.org/">thetransmitter.org</a>.</strong></p>







<p>Kim Stachenfeld embodies the original core focus of this podcast, the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, and reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience.</p>



<p>We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities.</p>



<p>She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days.</p>



<p>Most recently, Kim at DeepMind has focused on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. We don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.</p>



<ul class="wp-block-list">
<li><a href="https://neurokim.com/">Kim's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuro_kim?ref_src=twsrc%5Etfw%7Ctwcamp%5Eembeddedtimeline%7Ctwterm%5Escreen-name%3Aneuro_kim%7Ctwcon%5Es2">@neuro_kim</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2001.08361">Scaling Laws for Neural Language Models</a>.</li>



<li><a href="https://arxiv.org/abs/2206.07682">Emergent Abilities of Large Language Models</a>.</li>



<li>Learned simulators:
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2112.15275">Learned coarse models for efficient turbulence simulation</a>.</li>



<li><a href="https://arxiv.org/pdf/2202.00728">Physical design using differentiable learned simulators</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Check out <a href="https://www.thetransmitter.org/wp-content/uploads/2024/09/kim-stachenfeld-transcript-final.pdf" data-type="link" data-id="https://www.thetransmitter.org/wp-content/uploads/2024/09/kim-stachenfeld-transcript-final.pdf" target="_blank" rel="noreferrer noopener">the transcript</a>, provided by The Transmitter.</p>



<p>0:00 - Intro
4:31 - DeepMind's original and current vision
9:53 - AI as tools and models
12:53 - Has AI hindered neuroscience?
17:05 - DeepMind vs academic work balance
20:47 - Is industry better suited to understand brains?
24:42 - Trajectory of DeepMind
27:41 - Kim's trajectory
33:35 - Is the brain an ML entity?
36:12 - Hippocampus
44:12 - Reinforcement learning
51:32 - What does neuroscience need more and less of?
1:02:53 - Neuroscience in a weird place?
1:06:41 - How Kim's questions have changed
1:16:31 - Intelligence and LLMs
1:25:34 - Challenges</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p><em>The Transmitter </em>is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit <a href="http://thetransmitter.org/">thetransmitter.org</a> to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;</p>



<p>Read more about <a href="https://www.thetransmitter.org/partners/">our partnership</a>.</p>



<p>Check out this story:&nbsp; <strong><a href="https://www.thetransmitter.org/cognitive-neuroscience/monkeys-build-mental-maps-to-navigate-new-tasks/">Monkeys build mental maps to navigate new tasks</a> </strong></p>



<p>Sign up for <a href="https://www.thetransmitter.org/newsletters/">“Brain Inspired” email alerts</a> to be notified every time a new “Brain Inspired” episode is released.</p>



<p>To explore more neuroscience news and perspectives, <strong>visit <a href="http://thetransmitter.org/">thetransmitter.org</a>.</strong></p>







<p>Kim Stachenfeld embodies the original core focus of this podcast, the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, and reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience.</p>



<p>We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities.</p>



<p>She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days.</p>



<p>Most recently, Kim at DeepMind has focused on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. We don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.</p>



<ul class="wp-block-list">
<li><a href="https://neurokim.com/">Kim's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuro_kim?ref_src=twsrc%5Etfw%7Ctwcamp%5Eembeddedtimeline%7Ctwterm%5Escreen-name%3Aneuro_kim%7Ctwcon%5Es2">@neuro_kim</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2001.08361">Scaling Laws for Neural Language Models</a>.</li>



<li><a href="https://arxiv.org/abs/2206.07682">Emergent Abilities of Large Language Models</a>.</li>



<li>Learned simulators:
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2112.15275">Learned coarse models for efficient turbulence simulation</a>.</li>



<li><a href="https://arxiv.org/pdf/2202.00728">Physical design using differentiable learned simulators</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Check out <a href="https://www.thetransmitter.org/wp-content/uploads/2024/09/kim-stachenfeld-transcript-final.pdf" data-type="link" data-id="https://www.thetransmitter.org/wp-content/uploads/2024/09/kim-stachenfeld-transcript-final.pdf" target="_blank" rel="noreferrer noopener">the transcript</a>, provided by The Transmitter.</p>



<p>0:00 - Intro
4:31 - DeepMind's original and current vision
9:53 - AI as tools and models
12:53 - Has AI hindered neuroscience?
17:05 - DeepMind vs academic work balance
20:47 - Is industry better suited to understand brains?
24:42 - Trajectory of DeepMind
27:41 - Kim's trajectory
33:35 - Is the brain an ML entity?
36:12 - Hippocampus
44:12 - Reinforcement learning
51:32 - What does neuroscience need more and less of?
1:02:53 - Neuroscience in a weird place?
1:06:41 - How Kim's questions have changed
1:16:31 - Intelligence and LLMs
1:25:34 - Challenges</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2355/193.mp3" length="90412357" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;



Read more about our partnership.



Check out this story:&nbsp; Monkeys build mental maps to navigate new tasks 



Sign up for “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.







Kim Stachenfeld embodies the original core focus of this podcast, the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, and reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience.



We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities.



She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days.



Most recently, Kim at DeepMind has focused on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. We don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.




Kim's website.



Twitter:&nbsp;@neuro_kim.



Related papers

Scaling Laws for Neural Language Models.



Emergent Abilities of Large Language Models.



Learned simulators:

Learned coarse models for efficient turbulence simulation.



Physical design using differentiable learned simulators.








Check out the transcript, provided by The Transmitter.



0:00 - Intro
4:31 - DeepMind's original and current vision
9:53 - AI as tools and models
12:53 - Has AI hindered neuroscience?
17:05 - DeepMind vs academic work balance
20:47 - Is industry better suited to understand brains?
24:42 - Trajectory of DeepMind
27:41 - Kim's trajectory
33:35 - Is the brain an ML entity?
36:12 - Hippocampus
44:12 - Reinforcement learning
51:32 - What does neuroscience need more and less of?
1:02:53 - Neuroscience in a weird place?
1:06:41 - How Kim's questions have changed
1:16:31 - Intelligence and LLMs
1:25:34 - Challenges]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/09/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/09/thumb-website.jpg</url>
		<title>BI 193 Kim Stachenfeld: Enhancing Neuroscience and AI</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:32:41</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.&nbsp;



Read more about our partnership.



Check out this story:&nbsp; Monkeys build mental maps to navigate new tasks 



Sign up for “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.



To explore more neuroscience news and perspectives, visit thetransmitter.org.







Kim Stachenfeld embodies the original core focus of this podcast, the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscienc]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/09/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 192 Àlex Gómez-Marín: The Edges of Consciousness</title>
	<link>https://braininspired.co/podcast/192/</link>
	<pubDate>Wed, 28 Aug 2024 23:36:02 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2350</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Àlex Gómez-Marín heads <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a> at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness", which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experiences. For example, when we are under the influence of hallucinogens, when we have near-death experiences (as Alex has), paranormal experiences, and so on.</p>



<p>So we discuss what led up to his interests in these edges of consciousness, how he now thinks about consciousness and doing science in general, how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.</p>



<ul class="wp-block-list">
<li>Alex's website: <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/behaviOrganisms">@behaviOrganisms</a>.</li>



<li>Previous episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/168/">BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness</a>.</li>



<li><a href="https://braininspired.co/podcast/136/">BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology</a>.</li>
</ul>
</li>



<li>Related:
<ul class="wp-block-list">
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10646881/">The Consciousness of Neuroscience</a>.</li>



<li><a href="https://iai.tv/articles/seeing-the-consciousness-forest-for-the-trees-auid-2901?_auid=2020">Seeing the consciousness forest for the trees</a>.</li>



<li><a href="https://www.nature.com/articles/d41586-024-02603-2.pdf">The stairway to transhumanist heaven</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:13 - Evolving viewpoints
10:05 - Near-death experience
18:30 - Mechanistic neuroscience vs. the rest
22:46 - Are you doing science?
33:46 - Where is my mind?
44:55 - Productive vs. permissive brain
59:30 - Panpsychism
1:07:58 - Materialism
1:10:38 - How to choose what to do
1:16:54 - Fruit flies
1:19:52 - AI and the Singularity</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Àlex Gómez-Marín heads <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a> at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness", which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experiences. For example, when we are under the influence of hallucinogens, when we have near-death experiences (as Alex has), paranormal experiences, and so on.</p>



<p>So we discuss what led up to his interests in these edges of consciousness, how he now thinks about consciousness and doing science in general, how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.</p>



<ul class="wp-block-list">
<li>Alex's website: <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/behaviOrganisms">@behaviOrganisms</a>.</li>



<li>Previous episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/168/">BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness</a>.</li>



<li><a href="https://braininspired.co/podcast/136/">BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology</a>.</li>
</ul>
</li>



<li>Related:
<ul class="wp-block-list">
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10646881/">The Consciousness of Neuroscience</a>.</li>



<li><a href="https://iai.tv/articles/seeing-the-consciousness-forest-for-the-trees-auid-2901?_auid=2020">Seeing the consciousness forest for the trees</a>.</li>



<li><a href="https://www.nature.com/articles/d41586-024-02603-2.pdf">The stairway to transhumanist heaven</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:13 - Evolving viewpoints
10:05 - Near-death experience
18:30 - Mechanistic neuroscience vs. the rest
22:46 - Are you doing science?
33:46 - Where is my mind?
44:55 - Productive vs. permissive brain
59:30 - Panpsychism
1:07:58 - Materialism
1:10:38 - How to choose what to do
1:16:54 - Fruit flies
1:19:52 - AI and the Singularity</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2350/192.mp3" length="88171545" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness", which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experiences. For example, when we are under the influence of hallucinogens, when we have near-death experiences (as Alex has), paranormal experiences, and so on.



So we discuss what led up to his interests in these edges of consciousness, how he now thinks about consciousness and doing science in general, how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.




Alex's website: The Behavior of Organisms Laboratory.



Twitter:&nbsp;@behaviOrganisms.



Previous episodes:

BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness.



BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology.





Related:

The Consciousness of Neuroscience.



Seeing the consciousness forest for the trees.



The stairway to transhumanist heaven.






0:00 - Intro
4:13 - Evolving viewpoints
10:05 - Near-death experience
18:30 - Mechanistic neuroscience vs. the rest
22:46 - Are you doing science?
33:46 - Where is my mind?
44:55 - Productive vs. permissive brain
59:30 - Panpsychism
1:07:58 - Materialism
1:10:38 - How to choose what to do
1:16:54 - Fruit flies
1:19:52 - AI and the Singularity]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/08/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/08/thumb-website.jpg</url>
		<title>BI 192 Àlex Gómez-Marín: The Edges of Consciousness</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:30:34</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness", which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experiences. For example, when we are under the influence of hallucinogens, when we have near-death experiences (as Alex has), paranormal experiences, and so on.



So we discuss what led up to his interests in these edges of consciousness, how he now thinks about consciousness and doing science in general, how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.




Alex's website: The Behav]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/08/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence</title>
	<link>https://braininspired.co/podcast/191/</link>
	<pubDate>Thu, 15 Aug 2024 13:31:11 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2347</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Damian Kelty-Stephen is an experimental psychologist at State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. And we discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interests in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.</p>



<ul class="wp-block-list">
<li><a href="https://sites.google.com/site/foovian/">Damian's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2206.04603">In search for an alternative to the computer metaphor of the mind and brain</a>.</li>



<li><a href="https://arxiv.org/html/2401.05105v1">Multifractal emergent processes: Multiplicative interactions override nonlinear component properties</a>.</li>
</ul>
</li>
</ul>





<p>0:00 - Intro
2:34 - Damian's background
9:02 - Brains
12:56 - Do neuroscientists have it all wrong?
16:56 - Fractals everywhere
28:01 - Fractality, causality, and cascades
32:01 - Cascade instability as a metaphor for the brain
40:43 - Damian's worldview
46:09 - What is AI missing?
54:26 - Turbulence
1:01:02 - Intelligence without fractals? Multifractality
1:10:28 - Ergodicity
1:19:16 - Fractality, intelligence, life
1:23:24 - What's exciting, changing viewpoints</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Damian Kelty-Stephen is an experimental psychologist at State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ide]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Damian Kelty-Stephen is an experimental psychologist at State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. And we discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interests in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.</p>



<ul class="wp-block-list">
<li><a href="https://sites.google.com/site/foovian/">Damian's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2206.04603">In search for an alternative to the computer metaphor of the mind and brain</a>.</li>



<li><a href="https://arxiv.org/html/2401.05105v1">Multifractal emergent processes: Multiplicative interactions override nonlinear component properties</a>.</li>
</ul>
</li>
</ul>





<p>0:00 - Intro
2:34 - Damian's background
9:02 - Brains
12:56 - Do neuroscientists have it all wrong?
16:56 - Fractals everywhere
28:01 - Fractality, causality, and cascades
32:01 - Cascade instability as a metaphor for the brain
40:43 - Damian's worldview
46:09 - What is AI missing?
54:26 - Turbulence
1:01:02 - Intelligence without fractals? Multifractality
1:10:28 - Ergodicity
1:19:16 - Fractality, intelligence, life
1:23:24 - What's exciting, changing viewpoints</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2347/191.mp3" length="85587027" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Damian Kelty-Stephen is an experimental psychologist at State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. And we discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interests in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.




Damian's website.



Related papers

In search for an alternative to the computer metaphor of the mind and brain.



Multifractal emergent processes: Multiplicative interactions override nonlinear component properties.








0:00 - Intro
2:34 - Damian's background
9:02 - Brains
12:56 - Do neuroscientists have it all wrong?
16:56 - Fractals everywhere
28:01 - Fractality, causality, and cascades
32:01 - Cascade instability as a metaphor for the brain
40:43 - Damian's worldview
46:09 - What is AI missing?
54:26 - Turbulence
1:01:02 - Intelligence without fractals? Multifractality
1:10:28 - Ergodicity
1:19:16 - Fractality, intelligence, life
1:23:24 - What's exciting, changing viewpoints]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/08/thumb-2-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/08/thumb-2-website.jpg</url>
		<title>BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:51</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Damian Kelty-Stephen is an experimental psychologist at State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. And we discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scal]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/08/thumb-2-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 190 Luis Favela: The Ecological Brain</title>
	<link>https://braininspired.co/podcast/190/</link>
	<pubDate>Wed, 31 Jul 2024 14:27:44 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2341</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, <a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a>.</p>





<p>In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can get over their differences and be friends moving forward. And I'll just say, it's written in a very accessible manner, gently guiding the reader through many of the core concepts and science that have shaped ecological psychology and neuroscience, and for that reason alone I highly recommend it.</p>





<p>Ok, so we discuss a bunch of topics in the book, how Louie thinks, and Louie gives us some great background and historical lessons along the way.</p>



<ul class="wp-block-list">
<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis' website</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
7:05 - Louie's target with NExT
20:37 - Ecological psychology and grid cells
22:06 - Why irreconcilable?
28:59 - Why hasn't ecological psychology evolved more?
47:13 - NExT
49:10 - Hypothesis 1
55:45 - Hypothesis 2
1:02:55 - Artificial intelligence and ecological psychology
1:16:33 - Manifolds
1:31:20 - Hypothesis 4: Body, low-D, Synergies
1:35:53 - Hypothesis 5: Mind emerges
1:36:23 - Hypothesis 6:</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, <a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a>.</p>





<p>In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can get over their differences and be friends moving forward. And I'll just say, it's written in a very accessible manner, gently guiding the reader through many of the core concepts and science that have shaped ecological psychology and neuroscience, and for that reason alone I highly recommend it.</p>





<p>Ok, so we discuss a bunch of topics in the book, how Louie thinks, and Louie gives us some great background and historical lessons along the way.</p>



<ul class="wp-block-list">
<li><a href="https://luishfavela.wixsite.com/luishfavela">Luis' website</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3LbSgrI">The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
7:05 - Louie's target with NExT
20:37 - Ecological psychology and grid cells
22:06 - Why irreconcilable?
28:59 - Why hasn't ecological psychology evolved more?
47:13 - NExT
49:10 - Hypothesis 1
55:45 - Hypothesis 2
1:02:55 - Artificial intelligence and ecological psychology
1:16:33 - Manifolds
1:31:20 - Hypothesis 4: Body, low-D, Synergies
1:35:53 - Hypothesis 5: Mind emerges
1:36:23 - Hypothesis 6:</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2341/190.mp3" length="98281317" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment.





In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can get over their differences and be friends moving forward. And I'll just say, it's written in a very accessible manner, gently guiding the reader through many of the core concepts and science that have shaped ecological psychology and neuroscience, and for that reason alone I highly recommend it.





Ok, so we discuss a bunch of topics in the book, how Louie thinks, and Louie gives us some great background and historical lessons along the way.




Luis' website.



Book:

The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment






0:00 - Intro
7:05 - Louie's target with NExT
20:37 - Ecological psychology and grid cells
22:06 - Why irreconcilable?
28:59 - Why hasn't ecological psychology evolved more?
47:13 - NExT
49:10 - Hypothesis 1
55:45 - Hypothesis 2
1:02:55 - Artificial intelligence and ecological psychology
1:16:33 - Manifolds
1:31:20 - Hypothesis 4: Body, low-D, Synergies
1:35:53 - Hypothesis 5: Mind emerges
1:36:23 - Hypothesis 6:]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/07/190-Luis-Favela-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/07/190-Luis-Favela-website.jpg</url>
		<title>BI 190 Luis Favela: The Ecological Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:41:03</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment.





In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can g]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/07/190-Luis-Favela-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 189 Joshua Vogelstein: Connectomes and Prospective Learning</title>
	<link>https://braininspired.co/podcast/189/</link>
	<pubDate>Sat, 29 Jun 2024 12:40:06 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2338</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is the world's currently largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful and what to do with it, and so on.</p>



<p>The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward.</p>



<p>At some point a few audio/video sync issues crop up, so we switched to another recording method and fixed them... so just hang tight if you're viewing the podcast... it'll get better soon.</p>



<p>0:00 - Intro
05:25 - Jovo's approach
13:10 - Connectome of a fruit fly
26:39 - What to do with a connectome
37:04 - How important is a connectome?
51:48 - Prospective learning
1:15:20 - Efficiency
1:17:38 - AI doomerism</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I lea]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is currently the world's largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful, and what to do with it, and so on.</p>



<p>The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward.</p>



<p>At some point a few audio/video sync issues crop up, so we switched to another recording method and fixed them... so just hang tight if you're viewing the podcast... it'll get better soon.</p>



<p>0:00 - Intro
05:25 - Jovo's approach
13:10 - Connectome of a fruit fly
26:39 - What to do with a connectome
37:04 - How important is a connectome?
51:48 - Prospective learning
1:15:20 - Efficiency
1:17:38 - AI doomerism</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2338/189.mp3" length="84746049" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is currently the world's largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful, and what to do with it, and so on.



The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward.



At some point a few audio/video sync issues crop up, so we switched to another recording method and fixed them... so just hang tight if you're viewing the podcast... it'll get better soon.



0:00 - Intro
05:25 - Jovo's approach
13:10 - Connectome of a fruit fly
26:39 - What to do with a connectome
37:04 - How important is a connectome?
51:48 - Prospective learning
1:15:20 - Efficiency
1:17:38 - AI doomerism]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/06/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/06/thumb-website.jpg</url>
		<title>BI 189 Joshua Vogelstein: Connectomes and Prospective Learning</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:19</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is currently the world's largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful, and what to do with it, and so on.



The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward.



At some point a few audio/video sync issues crop up, so we switched to another recording method and fi]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/06/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 188 Jolande Fooken: Coordinating Action and Perception</title>
	<link>https://braininspired.co/podcast/188/</link>
	<pubDate>Mon, 27 May 2024 15:56:17 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2335</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it seems way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, thoughts, and ideas around those and related topics.</p>



<ul class="wp-block-list">
<li><a href="https://ookenfooken.github.io/">Jolande's website</a>.</li>



<li>Twitter: <a href="https://twitter.com/Ookenfooken">@ookenfooken</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://thefemalescientist.com/article/jolande-fooken/3262/i-am-a-parent-i-am-a-scientist/">I am a parent. I am a scientist</a>.</li>



<li><a href="https://ookenfooken.github.io/files/FookenEtal.JoV.2016.pdf">Eye movement accuracy determines natural interception strategies</a>.</li>



<li><a href="https://ookenfooken.github.io/files/FookenEtAl.JNeurosci.2023.pdf">Perceptual-cognitive integration for goal-directed action in naturalistic environments</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:27 - Eye movements
8:53 - Hand-eye coordination
9:30 - Hand-eye coordination and naturalistic tasks
26:45 - Levels of expertise
34:02 - Yarbus and eye movements
42:13 - Varieties of experimental paradigms, varieties of viewing the brain
52:46 - Career vision
1:04:07 - Evolving view about the brain
1:10:49 - Coordination, robots, and AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and move our hands together to accomplish naturalistic tasks. Hand-eye coo]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it seems way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, thoughts, and ideas around those and related topics.</p>



<ul class="wp-block-list">
<li><a href="https://ookenfooken.github.io/">Jolande's website</a>.</li>



<li>Twitter: <a href="https://twitter.com/Ookenfooken">@ookenfooken</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://thefemalescientist.com/article/jolande-fooken/3262/i-am-a-parent-i-am-a-scientist/">I am a parent. I am a scientist</a>.</li>



<li><a href="https://ookenfooken.github.io/files/FookenEtal.JoV.2016.pdf">Eye movement accuracy determines natural interception strategies</a>.</li>



<li><a href="https://ookenfooken.github.io/files/FookenEtAl.JNeurosci.2023.pdf">Perceptual-cognitive integration for goal-directed action in naturalistic environments</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:27 - Eye movements
8:53 - Hand-eye coordination
9:30 - Hand-eye coordination and naturalistic tasks
26:45 - Levels of expertise
34:02 - Yarbus and eye movements
42:13 - Varieties of experimental paradigms, varieties of viewing the brain
52:46 - Career vision
1:04:07 - Evolving view about the brain
1:10:49 - Coordination, robots, and AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2335/188.mp3" length="85683562" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it seems way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, thoughts, and ideas around those and related topics.




Jolande's website.



Twitter: @ookenfooken.



Related papers

I am a parent. I am a scientist.



Eye movement accuracy determines natural interception strategies.



Perceptual-cognitive integration for goal-directed action in naturalistic environments.






0:00 - Intro
3:27 - Eye movements
8:53 - Hand-eye coordination
9:30 - Hand-eye coordination and naturalistic tasks
26:45 - Levels of expertise
34:02 - Yarbus and eye movements
42:13 - Varieties of experimental paradigms, varieties of viewing the brain
52:46 - Career vision
1:04:07 - Evolving view about the brain
1:10:49 - Coordination, robots, and AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/05/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/05/thumb-website.jpg</url>
		<title>BI 188 Jolande Fooken: Coordinating Action and Perception</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:14</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it seems way less simple as soon as you learn how we make various kinds of eye movements, how we make various kinds of hand movements, and how we use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, thoughts, and ideas around those and related topics.




Jolande's website.



Twitter: @ookenfooken.



Related papers

I am a parent. I am a scientist.



Eye movement accuracy determines natural interception strategies.



Perceptual-cognitive in]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/05/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 187: COSYNE 2024 Neuro-AI Panel</title>
	<link>https://braininspired.co/podcast/187/</link>
	<pubDate>Sat, 20 Apr 2024 16:27:27 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2333</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.</p>



<ul class="wp-block-list">
<li><a href="https://www.cosyne.org/">COSYNE</a>.</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of CO]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.</p>



<ul class="wp-block-list">
<li><a href="https://www.cosyne.org/">COSYNE</a>.</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2333/187.mp3" length="61333340" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.




COSYNE.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/04/Artboard-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/04/Artboard-website.jpg</url>
		<title>BI 187: COSYNE 2024 Neuro-AI Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:03:35</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio v]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/04/Artboard-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 186 Mazviita Chirimuuta: The Brain Abstracted</title>
	<link>https://braininspired.co/podcast/186/</link>
	<pubDate>Mon, 25 Mar 2024 22:39:36 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2329</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, <a href="https://mitpress.mit.edu/9780262548045/the-brain-abstracted/">The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience</a>. </p>



<p>She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary to do the science and a limit on the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.</p>





<ul class="wp-block-list">
<li><a href="https://www.ed.ac.uk/profile/mazviita-chirimuuta">Mazviita's University of Edinburgh page</a>.</li>



<li><a href="https://mitpress.mit.edu/9780262548045/the-brain-abstracted/">The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience</a>.</li>



<li>Previous Brain Inspired episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/72/">BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality</a></li>



<li><a href="https://braininspired.co/podcast/114/">BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:28 - Neuroscience to philosophy
13:39 - Big themes of the book
27:44 - Simplifying by mathematics
32:19 - Simplifying by reduction
42:55 - Simplification by analogy
46:33 - Technology precedes science
55:04 - Theory, technology, and understanding
58:04 - Cross-disciplinary progress
58:45 - Complex vs. simple(r) systems
1:08:07 - Is science bound to study stability?
1:13:20 - 4E for philosophy but not neuroscience?
1:28:50 - ANNs as models
1:38:38 - Study of mind</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the Hi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, <a href="https://mitpress.mit.edu/9780262548045/the-brain-abstracted/">The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience</a>. </p>



<p>She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary to do the science and a limit on the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.</p>





<ul class="wp-block-list">
<li><a href="https://www.ed.ac.uk/profile/mazviita-chirimuuta">Mazviita's University of Edinburgh page</a>.</li>



<li><a href="https://mitpress.mit.edu/9780262548045/the-brain-abstracted/">The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience</a>.</li>



<li>Previous Brain Inspired episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/72/">BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality</a></li>



<li><a href="https://braininspired.co/podcast/114/">BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:28 - Neuroscience to philosophy
13:39 - Big themes of the book
27:44 - Simplifying by mathematics
32:19 - Simplifying by reduction
42:55 - Simplification by analogy
46:33 - Technology precedes science
55:04 - Theory, technology, and understanding
58:04 - Cross-disciplinary progress
58:45 - Complex vs. simple(r) systems
1:08:07 - Is science bound to study stability?
1:13:20 - 4E for philosophy but not neuroscience?
1:28:50 - ANNs as models
1:38:38 - Study of mind</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2329/186.mp3" length="100729239" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. 



She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary to do the science and a limit on the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.






Mazviita's University of Edinburgh page.



The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.



Previous Brain Inspired episodes:

BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality



BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind






0:00 - Intro
5:28 - Neuroscience to philosophy
13:39 - Big themes of the book
27:44 - Simplifying by mathematics
32:19 - Simplifying by reduction
42:55 - Simplification by analogy
46:33 - Technology precedes science
55:04 - Theory, technology, and understanding
58:04 - Cross-disciplinary progress
58:45 - Complex vs. simple(r) systems
1:08:07 - Is science bound to study stability?
1:13:20 - 4E for philosophy but not neuroscience?
1:28:50 - ANNs as models
1:38:38 - Study of mind]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/03/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/03/thumb-website-1.jpg</url>
		<title>BI 186 Mazviita Chirimuuta: The Brain Abstracted</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:34</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. 



She largely argues that when we try to understand something complex like the brain using models, math, and analogies, for example, we should keep in mind that these are all ways of simplifying and abstracting away details to give us something we actually can understand. And when we do science, every tool we use, every perspective we bring, every way we try to attack a problem is both necessary to do the science and a limit on the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.






Mazviita's University of Edinburgh page.



The Brain Abstracted: Simplifi]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/03/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 185 Eric Yttri: Orchestrating Behavior</title>
	<link>https://braininspired.co/podcast/185/</link>
	<pubDate>Wed, 06 Mar 2024 14:56:51 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2326</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.</p>



<p>Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation, while mice perform certain tasks or, in my case, while they freely behave, wandering around an enclosed space.</p>



<p>We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.</p>



<p><a href="https://labs.bio.cmu.edu/yttri/">Yttri Lab</a></p>



<ul class="wp-block-list">
<li>Twitter:&nbsp;<a href="https://twitter.com/YttriLab">@YttriLab</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://pubmed.ncbi.nlm.nih.gov/27135927/">Opponent and bidirectional control of movement velocity in the basal ganglia</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-021-25420-x?proof=tr">B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:36 - Eric's background
14:47 - Different animal models
17:59 - ANNs as models for animal brains
24:34 - Main question
25:43 - How circuits produce appropriate behaviors
26:10 - Cerebellum
27:49 - What do motor cortex and basal ganglia do?
49:12 - Neuroethology
1:06:09 - What is a behavior?
1:11:18 - Categorize behavior (B-SOiD)
1:22:01 - Real behavior vs. ANNs
1:33:09 - Best era in neuroscience</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.



Eric's lab stud]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.</p>



<p>Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks, or, in my case, while they behave freely, wandering around an enclosed space.</p>



<p>We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.</p>



<p><a href="https://labs.bio.cmu.edu/yttri/">Yttri Lab</a></p>



<ul class="wp-block-list">
<li>Twitter:&nbsp;<a href="https://twitter.com/YttriLab">@YttriLab</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://pubmed.ncbi.nlm.nih.gov/27135927/">Opponent and bidirectional control of movement velocity in the basal ganglia</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-021-25420-x?proof=tr">B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:36 - Eric's background
14:47 - Different animal models
17:59 - ANNs as models for animal brains
24:34 - Main question
25:43 - How circuits produce appropriate behaviors
26:10 - Cerebellum
27:49 - What do motor cortex and basal ganglia do?
49:12 - Neuroethology
1:06:09 - What is a behavior?
1:11:18 - Categorize behavior (B-SOiD)
1:22:01 - Real behavior vs. ANNs
1:33:09 - Best era in neuroscience</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2326/185.mp3" length="101900995" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.



Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks, or, in my case, while they behave freely, wandering around an enclosed space.



We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.



Yttri Lab




Twitter:&nbsp;@YttriLab



Related papers

Opponent and bidirectional control of movement velocity in the basal ganglia.



B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.






0:00 - Intro
2:36 - Eric's background
14:47 - Different animal models
17:59 - ANNs as models for animal brains
24:34 - Main question
25:43 - How circuits produce appropriate behaviors
26:10 - Cerebellum
27:49 - What do motor cortex and basal ganglia do?
49:12 - Neuroethology
1:06:09 - What is a behavior?
1:11:18 - Categorize behavior (B-SOiD)
1:22:01 - Real behavior vs. ANNs
1:33:09 - Best era in neuroscience]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/03/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/03/thumb-website.jpg</url>
		<title>BI 185 Eric Yttri: Orchestrating Behavior</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:44:50</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.



Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. To study that, he uses tools like optogenetics, neuronal recordings, and stimulation while mice perform certain tasks, or, in my case, while they behave freely, wandering around an enclosed space.



We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, and Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.



Yttri Lab]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/03/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 184 Peter Stratton: Synthesize Neural Principles</title>
	<link>https://braininspired.co/podcast/184/</link>
	<pubDate>Tue, 20 Feb 2024 02:35:15 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2323</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Peter Stratton is a research scientist at Queensland University of Technology.</p>





<p>I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, <a href="https://braininspired.co/podcast/183/">Dan Goodman</a>.</p>



<p>What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. That's because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?</p>



<ul class="wp-block-list">
<li><a href="http://neuro-ai.info/index.html">Peter's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s12559-023-10181-0">Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?</a></li>



<li><a href="https://arxiv.org/abs/2208.01204">Making a Spiking Net Work: Robust brain-like unsupervised machine learning</a>.</li>



<li><a href="https://www.frontiersin.org/articles/10.3389/fnsys.2015.00119/full">Global segregation of cortical activity and metastable dynamics</a>.</li>



<li><a href="https://physoc.onlinelibrary.wiley.com/doi/am-pdf/10.1113/jp271444">Unlocking neural complexity with a robotic key</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:50 - AI background, neuroscience principles
8:00 - Overall view of modern AI
14:14 - Moravec's paradox and robotics
20:50 - Understanding movement to understand cognition
30:01 - How close are we to understanding brains/minds?
32:17 - Pete's goal
34:43 - Principles from neuroscience to build AI
42:39 - Levels of abstraction and implementation
49:57 - Mental disorders and robustness
55:58 - Function vs. implementation
1:04:04 - Spiking networks
1:07:57 - The roadmap
1:19:10 - AGI
1:23:48 - The terms AGI and AI
1:26:12 - Consciousness</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Peter Stratton is a research scientist at Queensland University of Technology.





I was pointed toward Pete by a Patreon supporter, who sent me a sort of pers]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Peter Stratton is a research scientist at Queensland University of Technology.</p>





<p>I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, <a href="https://braininspired.co/podcast/183/">Dan Goodman</a>.</p>



<p>What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. That's because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?</p>



<ul class="wp-block-list">
<li><a href="http://neuro-ai.info/index.html">Peter's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://link.springer.com/article/10.1007/s12559-023-10181-0">Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?</a></li>



<li><a href="https://arxiv.org/abs/2208.01204">Making a Spiking Net Work: Robust brain-like unsupervised machine learning</a>.</li>



<li><a href="https://www.frontiersin.org/articles/10.3389/fnsys.2015.00119/full">Global segregation of cortical activity and metastable dynamics</a>.</li>



<li><a href="https://physoc.onlinelibrary.wiley.com/doi/am-pdf/10.1113/jp271444">Unlocking neural complexity with a robotic key</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:50 - AI background, neuroscience principles
8:00 - Overall view of modern AI
14:14 - Moravec's paradox and robotics
20:50 - Understanding movement to understand cognition
30:01 - How close are we to understanding brains/minds?
32:17 - Pete's goal
34:43 - Principles from neuroscience to build AI
42:39 - Levels of abstraction and implementation
49:57 - Mental disorders and robustness
55:58 - Function vs. implementation
1:04:04 - Spiking networks
1:07:57 - The roadmap
1:19:10 - AGI
1:23:48 - The terms AGI and AI
1:26:12 - Consciousness</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2323/184.mp3" length="88496242" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Peter Stratton is a research scientist at Queensland University of Technology.





I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.



What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower-level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. That's because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?




Peter's website.



Related papers

Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?



Making a Spiking Net Work: Robust brain-like unsupervised machine learning.



Global segregation of cortical activity and metastable dynamics.



Unlocking neural complexity with a robotic key






0:00 - Intro
3:50 - AI background, neuroscience principles
8:00 - Overall view of modern AI
14:14 - Moravec's paradox and robotics
20:50 - Understanding movement to understand cognition
30:01 - How close are we to understanding brains/minds?
32:17 - Pete's goal
34:43 - Principles from neuroscience to build AI
42:39 - Levels of abstraction and implementation
49:57 - Mental disorders and robustness
55:58 - Function vs. implementation
1:04:04 - Spiking networks
1:07:57 - The roadmap
1:19:10 - AGI
1:23:48 - The terms AGI and AI
1:26:12 - Consciousness]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/02/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/02/thumb-website-1.jpg</url>
		<title>BI 184 Peter Stratton: Synthesize Neural Principles</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:30:47</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Peter Stratton is a research scientist at Queensland University of Technology.





I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.



What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/02/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 183 Dan Goodman: Neural Reckoning</title>
	<link>https://braininspired.co/podcast/183/</link>
	<pubDate>Tue, 06 Feb 2024 23:57:15 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2320</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.</p>



<p>All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as <em>the</em> things that make our cognition tick.</p>



<p>We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.</p>



<p>So what does it mean that modern neural networks disregard spiking altogether?</p>



<p>Maybe spiking really isn't important for processing and transmitting information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.</p>



<ul class="wp-block-list">
<li><a href="https://neural-reckoning.org/">Neural Reckoning Group</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuralreckoning">@neuralreckoning</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41467-021-26022-3.pdf">Neural heterogeneity promotes robust learning</a>.</li>



<li><a href="https://arxiv.org/abs/2106.02626">Dynamics of specialization in neural modules under resource constraints</a>.</li>



<li><a href="https://neural-reckoning.org/pub_multimodal.html">Multimodal units fuse-then-accumulate evidence across channels</a>.</li>



<li><a href="https://www.dropbox.com/s/942rf97l80wyya5/snufa-meeting-report.pdf?dl=1">Visualizing a joint future of neuroscience and neuromorphic engineering</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:47 - Why spiking neural networks, and a mathematical background
13:16 - Efficiency
17:36 - Machine learning for neuroscience
19:38 - Why not jump ship from SNNs?
23:35 - Hard and easy tasks
29:20 - How brains and nets learn
32:50 - Exploratory vs. theory-driven science
37:32 - Static vs. dynamic
39:06 - Heterogeneity
46:01 - Unifying principles vs. a hodgepodge
50:37 - Sparsity
58:05 - Specialization and modularity
1:00:51 - Naturalistic experiments
1:03:41 - Projects for SNN research
1:05:09 - The right level of abstraction
1:07:58 - Obstacles to progress
1:12:30 - Levels of explanation
1:14:51 - What has AI taught neuroscience?
1:22:06 - How has neuroscience helped AI?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.</p>



<p>All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as <em>the</em> things that make our cognition tick.</p>



<p>We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.</p>



<p>So what does it mean that modern neural networks disregard spiking altogether?</p>



<p>Maybe spiking really isn't important for processing and transmitting information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.</p>



<ul class="wp-block-list">
<li><a href="https://neural-reckoning.org/">Neural Reckoning Group</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuralreckoning">@neuralreckoning</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41467-021-26022-3.pdf">Neural heterogeneity promotes robust learning</a>.</li>



<li><a href="https://arxiv.org/abs/2106.02626">Dynamics of specialization in neural modules under resource constraints</a>.</li>



<li><a href="https://neural-reckoning.org/pub_multimodal.html">Multimodal units fuse-then-accumulate evidence across channels</a>.</li>



<li><a href="https://www.dropbox.com/s/942rf97l80wyya5/snufa-meeting-report.pdf?dl=1">Visualizing a joint future of neuroscience and neuromorphic engineering</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:47 - Why spiking neural networks, and a mathematical background
13:16 - Efficiency
17:36 - Machine learning for neuroscience
19:38 - Why not jump ship from SNNs?
23:35 - Hard and easy tasks
29:20 - How brains and nets learn
32:50 - Exploratory vs. theory-driven science
37:32 - Static vs. dynamic
39:06 - Heterogeneity
46:01 - Unifying principles vs. a hodgepodge
50:37 - Sparsity
58:05 - Specialization and modularity
1:00:51 - Naturalistic experiments
1:03:41 - Projects for SNN research
1:05:09 - The right level of abstraction
1:07:58 - Obstacles to progress
1:12:30 - Levels of explanation
1:14:51 - What has AI taught neuroscience?
1:22:06 - How has neuroscience helped AI?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2320/183.mp3" length="86975539" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.



All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, is that, among the thousand ways ANNs differ from brains, they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.



We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.



So what does it mean that modern neural networks disregard spiking altogether?



Maybe spiking really isn't important for processing and transmitting information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.




Neural Reckoning Group.



Twitter:&nbsp;@neuralreckoning.



Related papers

Neural heterogeneity promotes robust learning.



Dynamics of specialization in neural modules under resource constraints.



Multimodal units fuse-then-accumulate evidence across channels.



Visualizing a joint future of neuroscience and neuromorphic engineering.






0:00 - Intro
3:47 - Why spiking neural networks, and a mathematical background
13:16 - Efficiency
17:36 - Machine learning for neuroscience
19:38 - Why not jump ship from SNNs?
23:35 - Hard and easy tasks
29:20 - How brains and nets learn
32:50 - Exploratory vs. theory-driven science
37:32 - Static vs. dynamic
39:06 - Heterogeneity
46:01 - Unifying principles vs. a hodgepodge
50:37 - Sparsity
58:05 - Specialization and modularity
1:00:51 - Naturalistic experiments
1:03:41 - Projects for SNN research
1:05:09 - The right level of abstraction
1:07:58 - Obstacles to progress
1:12:30 - Levels of explanation
1:14:51 - What has AI taught neuroscience?
1:22:06 - How has neuroscience helped AI?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/02/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/02/thumb-website.jpg</url>
		<title>BI 183 Dan Goodman: Neural Reckoning</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:54</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.



All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/02/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 182: John Krakauer Returns… Again</title>
	<link>https://braininspired.co/podcast/182/</link>
	<pubDate>Fri, 19 Jan 2024 15:48:28 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2317</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like</p>



<ul class="wp-block-list">
<li>Whether brains actually reorganize after damage</li>



<li>The role of brain plasticity in general</li>



<li>The path toward and the path <em>not</em> toward understanding higher cognition</li>



<li>How to fix motor problems after strokes</li>



<li>AGI</li>



<li>Functionalism, consciousness, and much more.</li>
</ul>



<p>Relevant links:</p>



<ul class="wp-block-list">
<li><a href="http://blam-lab.org">John's Lab.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/blamlab">@blamlab</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/representation/what-are-we-talking-about-clarifying-the-fuzzy-concept-of-representation-in-neuroscience-and-beyond/">What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond</a>.</li>



<li><a href="https://elifesciences.org/articles/84716">Against cortical reorganisation</a>.</li>
</ul>
</li>



<li>Other episodes with John:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/25/">BI 025 John Krakauer: Understanding Cognition</a></li>



<li><a href="https://braininspired.co/podcast/77/">BI 077 David and John Krakauer: Part 1</a></li>



<li><a href="https://braininspired.co/podcast/78/">BI 078 David and John Krakauer: Part 2</a></li>



<li><a href="https://braininspired.co/podcast/113/">BI 113 David Barack and John Krakauer: Two Views On Cognition</a></li>
</ul>
</li>
</ul>



<p>Time stamps
0:00 - Intro
2:07 - It's a podcast episode!
6:47 - Stroke and Sherrington neuroscience
19:26 - Thinking vs. moving, representations
34:15 - What's special about humans?
56:35 - Does cortical reorganization happen?
1:14:08 - Current era in neuroscience</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











John Krakauer has been on the podcast multiple times (see links below). Today]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately, things like:</p>



<ul class="wp-block-list">
<li>Whether brains actually reorganize after damage</li>



<li>The role of brain plasticity in general</li>



<li>The path toward and the path <em>not</em> toward understanding higher cognition</li>



<li>How to fix motor problems after strokes</li>



<li>AGI</li>



<li>Functionalism, consciousness, and much more.</li>
</ul>



<p>Relevant links:</p>



<ul class="wp-block-list">
<li><a href="http://blam-lab.org">John's Lab.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/blamlab">@blamlab</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.thetransmitter.org/representation/what-are-we-talking-about-clarifying-the-fuzzy-concept-of-representation-in-neuroscience-and-beyond/">What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond</a>.</li>



<li><a href="https://elifesciences.org/articles/84716">Against cortical reorganisation</a>.</li>
</ul>
</li>



<li>Other episodes with John:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/25/">BI 025 John Krakauer: Understanding Cognition</a></li>



<li><a href="https://braininspired.co/podcast/77/">BI 077 David and John Krakauer: Part 1</a></li>



<li><a href="https://braininspired.co/podcast/78/">BI 078 David and John Krakauer: Part 2</a></li>



<li><a href="https://braininspired.co/podcast/113/">BI 113 David Barack and John Krakauer: Two Views On Cognition</a></li>
</ul>
</li>
</ul>



<p>Time stamps
0:00 - Intro
2:07 - It's a podcast episode!
6:47 - Stroke and Sherrington neuroscience
19:26 - Thinking vs. moving, representations
34:15 - What's special about humans?
56:35 - Does cortical reorganization happen?
1:14:08 - Current era in neuroscience</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2317/182.mp3" length="82987746" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately, things like:




Whether brains actually reorganize after damage



The role of brain plasticity in general



The path toward and the path not toward understanding higher cognition



How to fix motor problems after strokes



AGI



Functionalism, consciousness, and much more.




Relevant links:




John's Lab.



Twitter:&nbsp;@blamlab



Related papers

What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.



Against cortical reorganisation.





Other episodes with John:

BI 025 John Krakauer: Understanding Cognition



BI 077 David and John Krakauer: Part 1



BI 078 David and John Krakauer: Part 2



BI 113 David Barack and John Krakauer: Two Views On Cognition






Time stamps
0:00 - Intro
2:07 - It's a podcast episode!
6:47 - Stroke and Sherrington neuroscience
19:26 - Thinking vs. moving, representations
34:15 - What's special about humans?
56:35 - Does cortical reorganization happen?
1:14:08 - Current era in neuroscience]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2024/01/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2024/01/thumb-website.jpg</url>
		<title>BI 182: John Krakauer Returns… Again</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:42</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately, things like:




Whether brains actually reorganize after damage



The role of brain plasticity in general



The path toward and the path not toward understanding higher cognition



How to fix motor problems after strokes



AGI



Functionalism, consciousness, and much more.




Relevant links:




John's Lab.



Twitter:&nbsp;@blamlab



Related papers

What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.



Against cortical reorganisation.





Other episodes with John:

BI 025 John Krakauer: Understanding Cognition



BI 077 David and John Krakauer: Part 1



BI 078 David and John Krakauer: ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2024/01/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 181 Max Bennett: A Brief History of Intelligence</title>
	<link>https://braininspired.co/podcast/181/</link>
	<pubDate>Mon, 25 Dec 2023 21:32:20 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2312</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>












<p>By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, <a href="https://amzn.to/3RSy0iQ">A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains</a>.</p>



<p>Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.</p>



<p>The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.</p>



<ul class="wp-block-list">
<li>Twitter:
<ul class="wp-block-list">
<li><a href="https://twitter.com/maxsbennett">@maxsbennett</a></li>
</ul>
</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3RSy0iQ">A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:26 - Why evolution is important
7:22 - MacLean's triune brain
14:59 - Breakthrough 1: Steering
29:06 - Fish intelligence
40:38 - Breakthrough 3: Mentalizing
52:44 - How could we improve the human brain?
1:00:44 - What is intelligence?
1:13:50 - Breakthrough 5: Speaking</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience













By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>












<p>By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, <a href="https://amzn.to/3RSy0iQ">A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains</a>.</p>



<p>Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.</p>



<p>The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.</p>



<ul class="wp-block-list">
<li>Twitter:
<ul class="wp-block-list">
<li><a href="https://twitter.com/maxsbennett">@maxsbennett</a></li>
</ul>
</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3RSy0iQ">A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:26 - Why evolution is important
7:22 - MacLean's triune brain
14:59 - Breakthrough 1: Steering
29:06 - Fish intelligence
40:38 - Breakthrough 3: Mentalizing
52:44 - How could we improve the human brain?
1:00:44 - What is intelligence?
1:13:50 - Breakthrough 5: Speaking</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2312/181.mp3" length="84875924" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience













By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.



Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.



The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.




Twitter:

@maxsbennett





Book:

A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.






0:00 - Intro
5:26 - Why evolution is important
7:22 - MacLean's triune brain
14:59 - Breakthrough 1: Steering
29:06 - Fish intelligence
40:38 - Breakthrough 3: Mentalizing
52:44 - How could we improve the human brain?
1:00:44 - What is intelligence?
1:13:50 - Breakthrough 5: Speaking]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/12/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/12/thumb-website-1.jpg</url>
		<title>BI 181 Max Bennett: A Brief History of Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:30</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience













By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.



Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multipl]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/12/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding</title>
	<link>https://braininspired.co/podcast/180/</link>
	<pubDate>Mon, 11 Dec 2023 14:39:11 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2310</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Welcome to another special panel discussion episode.</p>



<p>I was recently invited to moderate a discussion among six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before <a href="https://braininspired.co/podcast/103/">on episode 103</a>. He helps me introduce the meetup and panel discussion for a few minutes. The general goal was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount them, and so on.</p>



<p>There isn't video of the event, just audio, and because we were passing shared microphones around, you'll hear some microphone noise along the way - but I did my best to optimize the audio quality, and I believe it turned out quite listenable.</p>



<ul class="wp-block-list">
<li><a href="https://aspirationalneuroscience.org/">Aspirational Neuroscience</a></li>



<li>Panelists:
<ul class="wp-block-list">
<li><a href="https://alleninstitute.org/person/anton-arkhipov/">Anton Arkhipov</a>, Allen Institute for Brain Science.
<ul class="wp-block-list">
<li><a href="https://twitter.com/AntonSArkhipov">@AntonSArkhipov</a></li>
</ul>
</li>



<li><a href="http://koerding.com/">Konrad Kording</a>, University of Pennsylvania.
<ul class="wp-block-list">
<li><a href="https://twitter.com/KordingLab">@KordingLab</a></li>
</ul>
</li>



<li><a href="https://www.tcd.ie/research/profiles/?profile=tryan6">Tomás Ryan</a>, Trinity College Dublin.
<ul class="wp-block-list">
<li><a href="https://twitter.com/TJRyan_77">@TJRyan_77</a></li>
</ul>
</li>



<li><a href="https://www.janelia.org/people/srinivas-turaga">Srinivas Turaga</a>, Janelia Research Campus.</li>



<li><a href="https://viterbi.usc.edu/directory/faculty/Song/Dong">Dong Song</a>, University of Southern California.
<ul class="wp-block-list">
<li><a href="https://twitter.com/dongsong">@dongsong</a></li>
</ul>
</li>



<li><a href="https://pni.princeton.edu/people/zhihao-zheng">Zhihao Zheng</a>, Princeton University.
<ul class="wp-block-list">
<li><a href="https://twitter.com/zhihaozheng">@zhihaozheng</a></li>
</ul>
</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
1:45 - Ken Hayworth
14:09 - Panel Discussion</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Welcome to another special panel discussion episode.



I was recently invited to moderate at discussion amongst 6 people at the annual Aspirational Neuroscienc]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Welcome to another special panel discussion episode.</p>



<p>I was recently invited to moderate a discussion among six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before <a href="https://braininspired.co/podcast/103/">on episode 103</a>. He helps me introduce the meetup and panel discussion for a few minutes. The general goal was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount them, and so on.</p>



<p>There isn't video of the event, just audio, and because we were passing shared microphones around, you'll hear some microphone noise along the way - but I did my best to optimize the audio quality, and I believe it turned out quite listenable.</p>



<ul class="wp-block-list">
<li><a href="https://aspirationalneuroscience.org/">Aspirational Neuroscience</a></li>



<li>Panelists:
<ul class="wp-block-list">
<li><a href="https://alleninstitute.org/person/anton-arkhipov/">Anton Arkhipov</a>, Allen Institute for Brain Science.
<ul class="wp-block-list">
<li><a href="https://twitter.com/AntonSArkhipov">@AntonSArkhipov</a></li>
</ul>
</li>



<li><a href="http://koerding.com/">Konrad Kording</a>, University of Pennsylvania.
<ul class="wp-block-list">
<li><a href="https://twitter.com/KordingLab">@KordingLab</a></li>
</ul>
</li>



<li><a href="https://www.tcd.ie/research/profiles/?profile=tryan6">Tomás Ryan</a>, Trinity College Dublin.
<ul class="wp-block-list">
<li><a href="https://twitter.com/TJRyan_77">@TJRyan_77</a></li>
</ul>
</li>



<li><a href="https://www.janelia.org/people/srinivas-turaga">Srinivas Turaga</a>, Janelia Research Campus.</li>



<li><a href="https://viterbi.usc.edu/directory/faculty/Song/Dong">Dong Song</a>, University of Southern California.
<ul class="wp-block-list">
<li><a href="https://twitter.com/dongsong">@dongsong</a></li>
</ul>
</li>



<li><a href="https://pni.princeton.edu/people/zhihao-zheng">Zhihao Zheng</a>, Princeton University.
<ul class="wp-block-list">
<li><a href="https://twitter.com/zhihaozheng">@zhihaozheng</a></li>
</ul>
</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
1:45 - Ken Hayworth
14:09 - Panel Discussion</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2310/180.mp3" length="86298969" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Welcome to another special panel discussion episode.



I was recently invited to moderate a discussion among six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before on episode 103. He helps me introduce the meetup and panel discussion for a few minutes. The general goal was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount them, and so on.



There isn't video of the event, just audio, and because we were passing shared microphones around, you'll hear some microphone noise along the way - but I did my best to optimize the audio quality, and I believe it turned out quite listenable.




Aspirational Neuroscience



Panelists:

Anton Arkhipov, Allen Institute for Brain Science.

@AntonSArkhipov





Konrad Kording, University of Pennsylvania.

@KordingLab





Tomás Ryan, Trinity College Dublin.

@TJRyan_77





Srinivas Turaga, Janelia Research Campus.



Dong Song, University of Southern California.

@dongsong





Zhihao Zheng, Princeton University.

@zhihaozheng








0:00 - Intro
1:45 - Ken Hayworth
14:09 - Panel Discussion]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/12/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/12/thumb-website.jpg</url>
		<title>BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:27</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Welcome to another special panel discussion episode.



I was recently invited to moderate a discussion among six people at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before on episode 103. He helps me introduce the meetup and panel discussion for a few minutes. The general goal was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount them, and so on.



There isn't video of the event, just audio, and because we were passing shared microphones around, you'll hear some microphone noise along the way - but I did my best to optimize the audio quality, and I believe it turned out quite listenable.




Aspirational Neurosc]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/12/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 179 Laura Gradowski: Include the Fringe with Pluralism</title>
	<link>https://braininspired.co/podcast/179/</link>
	<pubDate>Mon, 27 Nov 2023 02:14:37 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2307</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, and that we should tolerate and welcome a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is a bit of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion: many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become accepted, mainstream theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, and more.</p>



<p>We discuss a wide range of topics, including some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. There are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.</p>



<ul class="wp-block-list">
<li><a href="https://www.centerphilsci.pitt.edu/fellows/gradowski-laura/">Laura's page</a> at the Center for the Philosophy of Science at the University of Pittsburgh.</li>
</ul>



<ul class="wp-block-list">
<li><a rev="en_rl_none" href="https://www.proquest.com/docview/2726491918?pq-origsite=gscholar&amp;fromopenview=true">Facing the Fringe</a>.</li>
</ul>



<ul class="wp-block-list">
<li>Garcia's reflections on his troubles: <a rev="en_rl_none" href="https://www.appstate.edu/~steelekm/classes/psy5150/Documents/Garcia1981-tilting-at.pdf">Tilting at the Paper Mills of Academe</a>.</li>
</ul>



<p>0:00 - Intro
3:57 - What is fringe?
10:14 - What makes a theory fringe?
14:31 - Fringe to mainstream
17:23 - Garcia effect
28:17 - Fringe to mainstream: other examples
32:38 - Fringe and consciousness
33:19 - Word meanings change over time
40:24 - Pseudoscience
43:25 - How fringe becomes mainstream
47:19 - More fringe characteristics
50:06 - Pluralism as a solution
54:02 - Progress
1:01:39 - Encyclopedia of theories
1:09:20 - When to reject a theory
1:20:07 - How fringe becomes fringe
1:22:50 - Marginalization
1:27:53 - Recipe for fringe theorist</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pl]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, and that we should tolerate and welcome a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is a bit of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion: many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become accepted, mainstream theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, and more.</p>



<p>We discuss a wide range of topics, including some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. There are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.</p>



<ul class="wp-block-list"><li><a href="https://www.centerphilsci.pitt.edu/fellows/gradowski-laura/">Laura's page</a> at the Center for the Philosophy of Science at the University of Pittsburgh.</li></ul>



<ul class="wp-block-list"><li><a rev="en_rl_none" href="https://www.proquest.com/docview/2726491918?pq-origsite=gscholar&amp;fromopenview=true">Facing the Fringe</a>.</li></ul>



<ul class="wp-block-list"><li>Garcia's reflections on his troubles: <a rev="en_rl_none" href="https://www.appstate.edu/~steelekm/classes/psy5150/Documents/Garcia1981-tilting-at.pdf">Tilting at the Paper Mills of Academe</a></li></ul>



<p>0:00 - Intro
3:57 - What is fringe?
10:14 - What makes a theory fringe?
14:31 - Fringe to mainstream
17:23 - Garcia effect
28:17 - Fringe to mainstream: other examples
32:38 - Fringe and consciousness
33:19 - Word meanings change over time
40:24 - Pseudoscience
43:25 - How fringe becomes mainstream
47:19 - More fringe characteristics
50:06 - Pluralism as a solution
54:02 - Progress
1:01:39 - Encyclopedia of theories
1:09:20 - When to reject a theory
1:20:07 - How fringe becomes fringe
1:22:50 - Marginalization
1:27:53 - Recipe for fringe theorist</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2307/179.mp3" length="96539150" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzz word right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc. 



We discuss a wide range of topics, including some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. There are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.



Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh. 



Facing the Fringe.



Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe



0:00 - Intro
3:57 - What is fringe?
10:14 - What makes a theory fringe?
14:31 - Fringe to mainstream
17:23 - Garcia effect
28:17 - Fringe to mainstream: other examples
32:38 - Fringe and consciousness
33:19 - Word meanings change over time
40:24 - Pseudoscience
43:25 - How fringe becomes mainstream
47:19 - More fringe characteristics
50:06 - Pluralism as a solution
54:02 - Progress
1:01:39 - Encyclopedia of theories
1:09:20 - When to reject a theory
1:20:07 - How fringe becomes fringe
1:22:50 - Marginalization
1:27:53 - Recipe for fringe theorist]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/11/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/11/thumb-website.jpg</url>
		<title>BI 179 Laura Gradowski: Include the Fringe with Pluralism</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:39:06</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzz word right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/11/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions</title>
	<link>https://braininspired.co/podcast/178/</link>
	<pubDate>Mon, 13 Nov 2023 20:36:19 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2304</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.</p>



<ul class="wp-block-list">
<li><a href="http://faculty.washington.edu/etsb/">Eric's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41467-021-21696-1">Predictive learning as a network mechanism for extracting low-dimensional latent space representations</a>.</li>



<li><a href="https://www.cell.com/patterns/pdf/S2666-3899(22)00160-X.pdf">A scale-dependent measure of system dimensionality</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438823001058">From lazy to rich to exclusive task representations in neural networks and neural codes</a>.</li>



<li><a href="http://arxiv.org/abs/1605.09073">Feedback through graph motifs relates structure and function in complex networks</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:15 - Reflecting on the rise of dynamical systems in neuroscience
11:15 - DST view on macro scale
15:56 - Intuitions
22:07 - Eric's approach
31:13 - Are brains more or less impressive to you now?
38:45 - Why is dimensionality important?
50:03 - High-D in Low-D
54:14 - Dynamical motifs
1:14:56 - Theory for its own sake
1:18:43 - Rich vs. lazy learning
1:22:58 - Latent variables
1:26:58 - What assumptions give you most pause?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Eric Shea-Brown is a theoretical neuroscientist and principal investigator of]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.</p>



<ul class="wp-block-list">
<li><a href="http://faculty.washington.edu/etsb/">Eric's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.nature.com/articles/s41467-021-21696-1">Predictive learning as a network mechanism for extracting low-dimensional latent space representations</a>.</li>



<li><a href="https://www.cell.com/patterns/pdf/S2666-3899(22)00160-X.pdf">A scale-dependent measure of system dimensionality</a>.</li>



<li><a href="https://www.sciencedirect.com/science/article/pii/S0959438823001058">From lazy to rich to exclusive task representations in neural networks and neural codes</a>.</li>



<li><a href="http://arxiv.org/abs/1605.09073">Feedback through graph motifs relates structure and function in complex networks</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:15 - Reflecting on the rise of dynamical systems in neuroscience
11:15 - DST view on macro scale
15:56 - Intuitions
22:07 - Eric's approach
31:13 - Are brains more or less impressive to you now?
38:45 - Why is dimensionality important?
50:03 - High-D in Low-D
54:14 - Dynamical motifs
1:14:56 - Theory for its own sake
1:18:43 - Rich vs. lazy learning
1:22:58 - Latent variables
1:26:58 - What assumptions give you most pause?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2304/178.mp3" length="92823356" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.




Eric's website.



Related papers

Predictive learning as a network mechanism for extracting low-dimensional latent space representations.



A scale-dependent measure of system dimensionality.



From lazy to rich to exclusive task representations in neural networks and neural codes.



Feedback through graph motifs relates structure and function in complex networks.






0:00 - Intro
4:15 - Reflecting on the rise of dynamical systems in neuroscience
11:15 - DST view on macro scale
15:56 - Intuitions
22:07 - Eric's approach
31:13 - Are brains more or less impressive to you now?
38:45 - Why is dimensionality important?
50:03 - High-D in Low-D
54:14 - Dynamical motifs
1:14:56 - Theory for its own sake
1:18:43 - Rich vs. lazy learning
1:22:58 - Latent variables
1:26:58 - What assumptions give you most pause?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/11/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/11/website-thumb.jpg</url>
		<title>BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:35:31</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discus]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/11/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 177 Special: Bernstein Workshop Panel</title>
	<link>https://braininspired.co/podcast/177/</link>
	<pubDate>Mon, 30 Oct 2023 01:12:31 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2300</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p>I was recently invited to moderate a panel at the Annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a> Below are the panelists. I hope you enjoy the discussion!</p>



<ul class="wp-block-list">
<li>Program: <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a></li>



<li>Panelists:
<ul class="wp-block-list">
<li>Katrin Franke
<ul class="wp-block-list">
<li><a href="https://www.eye-tuebingen.de/franke/">Lab website</a>.</li>



<li>Twitter: <a href="https://twitter.com/kfrankelab">@kfrankelab</a>.</li>
</ul>
</li>



<li>Ralf Haefner
<ul class="wp-block-list">
<li><a href="https://www2.bcs.rochester.edu/sites/haefnerlab/index.html">Haefner lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/haefnerlab">@haefnerlab</a>.</li>
</ul>
</li>



<li>Martin Hebart
<ul class="wp-block-list">
<li><a href="https://hebartlab.com/">Hebart Lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/martin_hebart">@martin_hebart</a>.</li>
</ul>
</li>



<li>Johannes Jaeger
<ul class="wp-block-list">
<li><a href="http://www.johannesjaeger.eu/">Yogi's website</a>.</li>



<li>Twitter: <a href="https://twitter.com/yoginho">@yoginho</a>.</li>
</ul>
</li>



<li>Fred Wolf
<ul class="wp-block-list">
<li><a href="https://www.uni-goettingen.de/en/58058.html">Fred's university webpage</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Organizers:</p>



<ul class="wp-block-list">
<li>Alexander Ecker | University of Göttingen, Germany</li>



<li>Fabian Sinz | University of Göttingen, Germany</li>



<li>Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







I was recently invited to moderate a panel at the Annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p>I was recently invited to moderate a panel at the Annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a> Below are the panelists. I hope you enjoy the discussion!</p>



<ul class="wp-block-list">
<li>Program: <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a></li>



<li>Panelists:
<ul class="wp-block-list">
<li>Katrin Franke
<ul class="wp-block-list">
<li><a href="https://www.eye-tuebingen.de/franke/">Lab website</a>.</li>



<li>Twitter: <a href="https://twitter.com/kfrankelab">@kfrankelab</a>.</li>
</ul>
</li>



<li>Ralf Haefner
<ul class="wp-block-list">
<li><a href="https://www2.bcs.rochester.edu/sites/haefnerlab/index.html">Haefner lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/haefnerlab">@haefnerlab</a>.</li>
</ul>
</li>



<li>Martin Hebart
<ul class="wp-block-list">
<li><a href="https://hebartlab.com/">Hebart Lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/martin_hebart">@martin_hebart</a>.</li>
</ul>
</li>



<li>Johannes Jaeger
<ul class="wp-block-list">
<li><a href="http://www.johannesjaeger.eu/">Yogi's website</a>.</li>



<li>Twitter: <a href="https://twitter.com/yoginho">@yoginho</a>.</li>
</ul>
</li>



<li>Fred Wolf
<ul class="wp-block-list">
<li><a href="https://www.uni-goettingen.de/en/58058.html">Fred's university webpage</a>.</li>
</ul>
</li>
</ul>
</li>
</ul>



<p>Organizers:</p>



<ul class="wp-block-list">
<li>Alexander Ecker | University of Göttingen, Germany</li>



<li>Fabian Sinz | University of Göttingen, Germany</li>



<li>Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2300/177.mp3" length="71223961" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







I was recently invited to moderate a panel at the Annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion!




Program: How can machine learning be used to generate insights and theories in neuroscience?



Panelists:

Katrin Franke

Lab website.



Twitter: @kfrankelab.





Ralf Haefner

Haefner lab.



Twitter: @haefnerlab.





Martin Hebart

Hebart Lab.



Twitter: @martin_hebart.





Johannes Jaeger

Yogi's website.



Twitter: @yoginho.





Fred Wolf

Fred's university webpage.








Organizers:




Alexander Ecker | University of Göttingen, Germany



Fabian Sinz | University of Göttingen, Germany



Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/10/yotube-backArtboard-4.png"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/10/yotube-backArtboard-4.png</url>
		<title>BI 177 Special: Bernstein Workshop Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:13:54</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







I was recently invited to moderate a panel at the Annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was at a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion!




Program: How can machine learning be used to generate insights and theories in neuroscience?



Panelists:

Katrin Franke

Lab website.



Twitter: @kfrankelab.





Ralf Haefner

Haefner lab.



Twitter: @haefnerlab.





Martin Hebart

Hebart Lab.



Twitter: @martin_hebart.





Johannes Jaeger

Yogi's website.



Twitter: @yoginho.





Fred Wolf

Fred's university webpage.








Organizers:




Alexander Ecker | University of Göttingen, Germany



Fabian Sinz | University of Göttingen, Germany



Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | Unive]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/10/yotube-backArtboard-4.png"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 176 David Poeppel Returns</title>
	<link>https://braininspired.co/podcast/176/</link>
	<pubDate>Sat, 14 Oct 2023 16:47:13 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2298</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>David runs <a href="http://psych.nyu.edu/clash/poeppellab/">his lab at NYU</a>, where they study auditory cognition, speech perception, language, and music. On the heels of the <a href="https://braininspired.co/podcast/172/">episode with David Glanzman</a>, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.</p>



<p>David has been on the podcast a few times... <a href="https://braininspired.co/podcast/46/">once by himself</a>, and <a href="https://braininspired.co/podcast/84/">again with Gyorgy Buzsaki</a>.</p>



<ul class="wp-block-list">
<li><a href="http://psych.nyu.edu/clash/poeppellab/">Poeppel lab</a></li>



<li>Twitter: <a href="https://twitter.com/davidpoeppel">@davidpoeppel</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(22)00206-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661322002066%3Fshowall%3Dtrue">We don’t know how the brain stores anything, let alone words</a>.</li>



<li><a href="https://arxiv.org/pdf/2210.01869.pdf">Memory in humans and deep language models: Linking hypotheses for model augmentation.</a></li>



<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S1364661323001936">The neural ingredients for a language of thought are available.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
11:17 - Across levels
14:598 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Across levels
54:07 - Speed of progress
1:02:21 - Neuroethology and mental illness - patreon
1:24:42 - Language of Thought</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we di]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>David runs <a href="http://psych.nyu.edu/clash/poeppellab/">his lab at NYU</a>, where they study auditory cognition, speech perception, language, and music. On the heels of the <a href="https://braininspired.co/podcast/172/">episode with David Glanzman</a>, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.</p>



<p>David has been on the podcast a few times... <a href="https://braininspired.co/podcast/46/">once by himself</a>, and <a href="https://braininspired.co/podcast/84/">again with Gyorgy Buzsaki</a>.</p>



<ul class="wp-block-list">
<li><a href="http://psych.nyu.edu/clash/poeppellab/">Poeppel lab</a></li>



<li>Twitter: <a href="https://twitter.com/davidpoeppel">@davidpoeppel</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(22)00206-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661322002066%3Fshowall%3Dtrue">We don’t know how the brain stores anything, let alone words</a>.</li>



<li><a href="https://arxiv.org/pdf/2210.01869.pdf">Memory in humans and deep language models: Linking hypotheses for model augmentation.</a></li>



<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S1364661323001936">The neural ingredients for a language of thought are available.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
11:17 - Across levels
14:598 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Across levels
54:07 - Speed of progress
1:02:21 - Neuroethology and mental illness - patreon
1:24:42 - Language of Thought</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2298/176.mp3" length="81395571" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.



David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki.




Poeppel lab



Twitter: @davidpoeppel.



Related papers

We don’t know how the brain stores anything, let alone words.



Memory in humans and deep language models: Linking hypotheses for model augmentation.



The neural ingredients for a language of thought are available.






0:00 - Intro
11:17 - Across levels
14:598 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Across levels
54:07 - Speed of progress
1:02:21 - Neuroethology and mental illness - patreon
1:24:42 - Language of Thought]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/10/website-thumb-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/10/website-thumb-1.jpg</url>
		<title>BI 176 David Poeppel Returns</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:23:57</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.



David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki.




Poeppel lab



Twitter: @davidpoeppel.



Related papers

We don’t know how the brain stores anything, let alone words.



Memory in humans and deep language models: Linking hypotheses for model augmentation.



The neural ingredients for a language of thought are available.






0:00 - Intro
11:17 - Across levels
14:59 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Acro]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/10/website-thumb-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 175 Kevin Mitchell: Free Agents</title>
	<link>https://braininspired.co/podcast/175/</link>
	<pubDate>Tue, 03 Oct 2023 10:37:16 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2292</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Kevin Mitchell is professor of genetics at Trinity College Dublin. He's <a href="https://braininspired.co/podcast/111/">been on the podcast before</a>, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book <a href="https://amzn.to/3thGq9V">Free Agents: How Evolution Gave Us Free Will</a>. The book is very well written and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity and richness of our agency evolved as organisms became more complex.</p>





<p>We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.</p>



<ul class="wp-block-list">
<li><a href="https://www.kjmitchell.com/">Kevin's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/WiringTheBrain?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">@WiringtheBrain</a></li>



<li>Book: <a href="https://amzn.to/3thGq9V">Free Agents: How Evolution Gave Us Free Will</a></li>
</ul>



<p>4:27 - From Innate to Free Agents
9:14 - Thinking of the whole organism
15:11 - Who the book is for
19:49 - What bothers Kevin
27:00 - Indeterminacy
30:08 - How it all began
33:08 - How indeterminacy helps
43:58 - Libet's free will experiments
50:36 - Creativity
59:16 - Selves, subjective experience, agency, and free will
1:10:04 - Levels of agency and free will
1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints
1:36:39 - Artificial agents and free will
1:42:57 - Next book?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been o]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Kevin Mitchell is professor of genetics at Trinity College Dublin. He's <a href="https://braininspired.co/podcast/111/">been on the podcast before</a>, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book <a href="https://amzn.to/3thGq9V">Free Agents: How Evolution Gave Us Free Will</a>. The book is very well written and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity and richness of our agency evolved as organisms became more complex.</p>





<p>We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.</p>



<ul class="wp-block-list">
<li><a href="https://www.kjmitchell.com/">Kevin's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/WiringTheBrain?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">@WiringtheBrain</a></li>



<li>Book: <a href="https://amzn.to/3thGq9V">Free Agents: How Evolution Gave Us Free Will</a></li>
</ul>



<p>4:27 - From Innate to Free Agents
9:14 - Thinking of the whole organism
15:11 - Who the book is for
19:49 - What bothers Kevin
27:00 - Indeterminacy
30:08 - How it all began
33:08 - How indeterminacy helps
43:58 - Libet's free will experiments
50:36 - Creativity
59:16 - Selves, subjective experience, agency, and free will
1:10:04 - Levels of agency and free will
1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints
1:36:39 - Artificial agents and free will
1:42:57 - Next book?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2292/175.mp3" length="103521752" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is very well written and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity and richness of our agency evolved as organisms became more complex.





We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.




Kevin's website.



Twitter:&nbsp;@WiringtheBrain



Book: Free Agents: How Evolution Gave Us Free Will




4:27 - From Innate to Free Agents
9:14 - Thinking of the whole organism
15:11 - Who the book is for
19:49 - What bothers Kevin
27:00 - Indeterminacy
30:08 - How it all began
33:08 - How indeterminacy helps
43:58 - Libet's free will experiments
50:36 - Creativity
59:16 - Selves, subjective experience, agency, and free will
1:10:04 - Levels of agency and free will
1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints
1:36:39 - Artificial agents and free will
1:42:57 - Next book?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/10/website-thumb.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/10/website-thumb.jpg</url>
		<title>BI 175 Kevin Mitchell: Free Agents</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:46:32</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is very well written and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/10/website-thumb.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 174 Alicia Juarrero: Context Changes Everything</title>
	<link>https://braininspired.co/podcast/174/</link>
	<pubDate>Wed, 13 Sep 2023 13:06:24 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2287</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.</p>





<p>In this episode, we discuss many of the topics and ideas in her new book, <a href="https://mitpress.mit.edu/9780262545662/context-changes-everything/">Context Changes Everything: How Constraints Create Coherence</a>, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still don't fit comfortably with the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics in Action, which we also discuss and which I also recommend if you want more of a primer to her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.</p>





<ul class="wp-block-list">
<li>Book:
<ul class="wp-block-list">
<li><a href="https://mitpress.mit.edu/9780262545662/context-changes-everything/">Context Changes Everything: How Constraints Create Coherence</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:37 - 25 years thinking about constraints
8:45 - Dynamics in Action and eliminativism
13:08 - Efficient and other kinds of causation
19:04 - Complexity via context independent and dependent constraints
25:53 - Enabling and limiting constraints
30:55 - Across scales
36:32 - Temporal constraints
42:58 - A constraint cookbook?
52:12 - Constraints in a mechanistic worldview
53:42 - How to explain using constraints
56:22 - Concepts and multiple realizability
59:00 - Kevin Mitchell question
1:08:07 - Mac Shine Question
1:19:07 - 4E
1:21:38 - Dimensionality across levels
1:27:26 - AI and constraints
1:33:08 - AI and life</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about whats missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Alicia Juarrero is a philosopher and has been interested in complexity since be]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.</p>





<p>In this episode, we discuss many of the topics and ideas in her new book, <a href="https://mitpress.mit.edu/9780262545662/context-changes-everything/">Context Changes Everything: How Constraints Create Coherence</a>, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still don't fit comfortably with the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics in Action, which we also discuss and which I also recommend if you want more of a primer to her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.</p>





<ul class="wp-block-list">
<li>Book:
<ul class="wp-block-list">
<li><a href="https://mitpress.mit.edu/9780262545662/context-changes-everything/">Context Changes Everything: How Constraints Create Coherence</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:37 - 25 years thinking about constraints
8:45 - Dynamics in Action and eliminativism
13:08 - Efficient and other kinds of causation
19:04 - Complexity via context independent and dependent constraints
25:53 - Enabling and limiting constraints
30:55 - Across scales
36:32 - Temporal constraints
42:58 - A constraint cookbook?
52:12 - Constraints in a mechanistic worldview
53:42 - How to explain using constraints
56:22 - Concepts and multiple realizability
59:00 - Kevin Mitchell question
1:08:07 - Mac Shine Question
1:19:07 - 4E
1:21:38 - Dimensionality across levels
1:27:26 - AI and constraints
1:33:08 - AI and life</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2287/174.mp3" length="102292408" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.





In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other. I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still don't fit comfortably with the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics in Action, which we also discuss and which I also recommend if you want more of a primer to her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.






Book:

Context Changes Everything: How Constraints Create Coherence






0:00 - Intro
3:37 - 25 years thinking about constraints
8:45 - Dynamics in Action and eliminativism
13:08 - Efficient and other kinds of causation
19:04 - Complexity via context independent and dependent constraints
25:53 - Enabling and limiting constraints
30:55 - Across scales
36:32 - Temporal constraints
42:58 - A constraint cookbook?
52:12 - Constraints in a mechanistic worldview
53:42 - How to explain using constraints
56:22 - Concepts and multiple realizability
59:00 - Kevin Mitchell question
1:08:07 - Mac Shine Question
1:19:07 - 4E
1:21:38 - Dimensionality across levels
1:27:26 - AI and constraints
1:33:08 - AI and life]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/09/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/09/thumb-website.jpg</url>
		<title>BI 174 Alicia Juarrero: Context Changes Everything</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:45:00</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.





In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of sy]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/09/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 173 Justin Wood: Origins of Visual Intelligence</title>
	<link>https://braininspired.co/podcast/173/</link>
	<pubDate>Wed, 30 Aug 2023 13:30:47 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2274</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>In the intro, I mention the Bernstein conference workshop I'll participate in, called <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a>. Follow that link to learn more, and <a href="https://bernstein-network.de/bernstein-conference/registration/">register for the conference here</a>. Hope to see you there in late September in Berlin!</p>





<p>Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and to use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to an approach that embraces both nature and nurture.</p>



<ul class="wp-block-list">
<li><a href="http://buildingamind.com/">Wood lab</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2112.06106">Controlled-rearing studies of newborn chicks and deep neural networks</a>.</li>



<li><a href="https://arxiv.org/abs/2111.03796">Development of collective behavior in newborn artificial agents</a>.</li>



<li><a href="https://arxiv.org/abs/2306.05582">A newborn embodied Turing test for view-invariant object recognition</a>.</li>
</ul>
</li>



<li>Justin mentions these papers:
<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/~tony/vns/readings/dicarlo-cox-2007.pdf">Untangling invariant object recognition (DiCarlo &amp; Cox 2007)</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:39 - Origins of Justin's current research
11:17 - Controlled rearing approach
21:52 - Comparing newborns and AI models
24:11 - Nativism vs. empiricism
28:15 - CNNs and early visual cognition
29:35 - Smoothness and slowness
50:05 - Early biological development
53:27 - Naturalistic vs. highly controlled
56:30 - Collective behavior in animals and machines
1:02:34 - Curiosity and critical periods
1:09:05 - Controlled rearing vs. other developmental studies
1:13:25 - Breaking natural rules
1:16:33 - Deep RL collective behavior
1:23:16 - Bottom-up and top-down</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuro]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>In the intro, I mention the Bernstein conference workshop I'll participate in, called <a href="https://bernstein-network.de/bernstein-conference/program/satellite-workshops/machine-learning/">How can machine learning be used to generate insights and theories in neuroscience?</a>. Follow that link to learn more, and <a href="https://bernstein-network.de/bernstein-conference/registration/">register for the conference here</a>. Hope to see you there in late September in Berlin!</p>





<p>Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and to use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to an approach that embraces both nature and nurture.</p>



<ul class="wp-block-list">
<li><a href="http://buildingamind.com/">Wood lab</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://arxiv.org/abs/2112.06106">Controlled-rearing studies of newborn chicks and deep neural networks</a>.</li>



<li><a href="https://arxiv.org/abs/2111.03796">Development of collective behavior in newborn artificial agents</a>.</li>



<li><a href="https://arxiv.org/abs/2306.05582">A newborn embodied Turing test for view-invariant object recognition</a>.</li>
</ul>
</li>



<li>Justin mentions these papers:
<ul class="wp-block-list">
<li><a href="https://www.cns.nyu.edu/~tony/vns/readings/dicarlo-cox-2007.pdf">Untangling invariant object recognition (DiCarlo &amp; Cox 2007)</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:39 - Origins of Justin's current research
11:17 - Controlled rearing approach
21:52 - Comparing newborns and AI models
24:11 - Nativism vs. empiricism
28:15 - CNNs and early visual cognition
29:35 - Smoothness and slowness
50:05 - Early biological development
53:27 - Naturalistic vs. highly controlled
56:30 - Collective behavior in animals and machines
1:02:34 - Curiosity and critical periods
1:09:05 - Controlled rearing vs. other developmental studies
1:13:25 - Breaking natural rules
1:16:33 - Deep RL collective behavior
1:23:16 - Bottom-up and top-down</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2274/173.mp3" length="93047261" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience?. Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin!





Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and to use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to an approach that embraces both nature and nurture.




Wood lab.



Related papers:

Controlled-rearing studies of newborn chicks and deep neural networks.



Development of collective behavior in newborn artificial agents.



A newborn embodied Turing test for view-invariant object recognition.





Justin mentions these papers:

Untangling invariant object recognition (DiCarlo &amp; Cox 2007)






0:00 - Intro
5:39 - Origins of Justin's current research
11:17 - Controlled rearing approach
21:52 - Comparing newborns and AI models
24:11 - Nativism vs. empiricism
28:15 - CNNs and early visual cognition
29:35 - Smoothness and slowness
50:05 - Early biological development
53:27 - Naturalistic vs. highly controlled
56:30 - Collective behavior in animals and machines
1:02:34 - Curiosity and critical periods
1:09:05 - Controlled rearing vs. other developmental studies
1:13:25 - Breaking natural rules
1:16:33 - Deep RL collective behavior
1:23:16 - Bottom-up and top-down]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/08/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/08/thumb-website-1.jpg</url>
		<title>BI 173 Justin Wood: Origins of Visual Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:35:45</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience? Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin!





Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/08/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 172 David Glanzman: Memory All The Way Down</title>
	<link>https://braininspired.co/podcast/172/</link>
	<pubDate>Mon, 07 Aug 2023 10:46:27 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2235</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>David runs his lab at UCLA where he's also a distinguished professor.&nbsp; David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons.&nbsp; So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on.</p>







<ul class="wp-block-list">
<li><a href="https://www.ibp.ucla.edu/faculty/david-glanzman/">David's Faculty Page</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006291X21007518">The central importance of nuclear mechanisms in the storage of memory</a>.</li>



<li>David mentions Arc and virus-like transmission:
<ul class="wp-block-list">
<li><a href="https://www.cell.com/cell/fulltext/S0092-8674(17)31504-0">The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/31953526/">Structure of an Arc-ane virus-like capsid</a>.</li>
</ul>
</li>
</ul>
</li>



<li>David mentions many of the ideas from the <a href="https://2023symposium.ibp.ucla.edu/">Pushing the Boundaries: Neuroscience, Cognition, and Life</a> Symposium.</li>



<li>Related episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>



<li><a href="https://braininspired.co/podcast/127/">BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</a></li>
</ul>
</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at UCLA where he's also a distinguished professor.&nbsp; David used to believe what is currently the mainstream view, that our memories are ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>David runs his lab at UCLA where he's also a distinguished professor.&nbsp; David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons.&nbsp; So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on.</p>







<ul class="wp-block-list">
<li><a href="https://www.ibp.ucla.edu/faculty/david-glanzman/">David's Faculty Page</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006291X21007518">The central importance of nuclear mechanisms in the storage of memory</a>.</li>



<li>David mentions Arc and virus-like transmission:
<ul class="wp-block-list">
<li><a href="https://www.cell.com/cell/fulltext/S0092-8674(17)31504-0">The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer</a>.</li>



<li><a href="https://pubmed.ncbi.nlm.nih.gov/31953526/">Structure of an Arc-ane virus-like capsid</a>.</li>
</ul>
</li>
</ul>
</li>



<li>David mentions many of the ideas from the <a href="https://2023symposium.ibp.ucla.edu/">Pushing the Boundaries: Neuroscience, Cognition, and Life</a> Symposium.</li>



<li>Related episodes:
<ul class="wp-block-list">
<li><a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a></li>



<li><a href="https://braininspired.co/podcast/127/">BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</a></li>
</ul>
</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2235/172.mp3" length="88463504" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at UCLA where he's also a distinguished professor.&nbsp; David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons.&nbsp; So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on.








David's Faculty Page.



Related papers

The central importance of nuclear mechanisms in the storage of memory.



David mentions Arc and virus-like transmission:

The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer.



Structure of an Arc-ane virus-like capsid.







David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium.



Related episodes:

BI 126 Randy Gallistel: Where Is the Engram?



BI 127 Tomás Ryan: Memory, Instinct, and Forgetting]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/08/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/08/thumb-website.jpg</url>
		<title>BI 172 David Glanzman: Memory All The Way Down</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:30:58</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











David runs his lab at UCLA where he's also a distinguished professor.&nbsp; David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons.&nbsp; So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/08/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 171 Mike Frank: Early Language and Cognition</title>
	<link>https://braininspired.co/podcast/171/</link>
	<pubDate>Sat, 22 Jul 2023 00:17:23 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2228</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.</p>



<p>We discuss that, his love for developing open data sets that anyone can use,</p>



<p>The dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches</p>



<p>How early language learning in children differs from LLM learning</p>



<p>Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue.</p>



<ul class="wp-block-list">
<li><a href="https://langcog.stanford.edu/">Language &amp; Cognition Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mcxfrank">@mcxfrank</a>.
<ul class="wp-block-list">
<li>I mentioned Mike's <a href="https://twitter.com/mcxfrank/status/1643296168276033538">tweet thread</a> about saying LLMs "have" cognitive functions:</li>
</ul>
</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="http://langcog.stanford.edu/papers_new/goodman-2016-tics.pdf">Pragmatic language interpretation as probabilistic inference.</a></li>



<li><a href="https://psyarxiv.com/yhrb4">Toward a “Standard Model” of Early Language Learning.</a></li>



<li><a href="https://psyarxiv.com/v8e56/">The pervasive role of pragmatics in early language.</a></li>



<li><a href="https://psyarxiv.com/95erq">The Structure of Developmental Variation in Early Childhood.</a></li>



<li><a href="https://arxiv.org/pdf/2006.07968.pdf">Relational reasoning and generalization using non-symbolic neural networks.</a></li>



<li><a href="https://www.pnas.org/doi/10.1073/pnas.2014196118">Unsupervised neural network models of the ventral visual stream.</a></li>
</ul>
</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











My guest is Michael C. Frank, better known as Mike Frank, who runs the Langua]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.</p>



<p>We discuss that, his love for developing open data sets that anyone can use,</p>



<p>The dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches</p>



<p>How early language learning in children differs from LLM learning</p>



<p>Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue.</p>



<ul class="wp-block-list">
<li><a href="https://langcog.stanford.edu/">Language &amp; Cognition Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mcxfrank">@mcxfrank</a>.
<ul class="wp-block-list">
<li>I mentioned Mike's <a href="https://twitter.com/mcxfrank/status/1643296168276033538">tweet thread</a> about saying LLMs "have" cognitive functions:</li>
</ul>
</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="http://langcog.stanford.edu/papers_new/goodman-2016-tics.pdf">Pragmatic language interpretation as probabilistic inference.</a></li>



<li><a href="https://psyarxiv.com/yhrb4">Toward a “Standard Model” of Early Language Learning.</a></li>



<li><a href="https://psyarxiv.com/v8e56/">The pervasive role of pragmatics in early language.</a></li>



<li><a href="https://psyarxiv.com/95erq">The Structure of Developmental Variation in Early Childhood.</a></li>



<li><a href="https://arxiv.org/pdf/2006.07968.pdf">Relational reasoning and generalization using non-symbolic neural networks.</a></li>



<li><a href="https://www.pnas.org/doi/10.1073/pnas.2014196118">Unsupervised neural network models of the ventral visual stream.</a></li>
</ul>
</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2228/171.mp3" length="82274440" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.



We discuss that, his love for developing open data sets that anyone can use,



The dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches



How early language learning in children differs from LLM learning



Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue.




Language &amp; Cognition Lab



Twitter:&nbsp;@mcxfrank.

I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions:





Related papers:

Pragmatic language interpretation as probabilistic inference.



Toward a “Standard Model” of Early Language Learning.



The pervasive role of pragmatics in early language.



The Structure of Developmental Variation in Early Childhood.



Relational reasoning and generalization using non-symbolic neural networks.



Unsupervised neural network models of the ventral visual stream.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/07/171-Michael-C-Frank-wevsite.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/07/171-Michael-C-Frank-wevsite.jpg</url>
		<title>BI 171 Mike Frank: Early Language and Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:24:40</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.



We discuss that, his love for developing open data sets that anyone can use,



The dance he dances between bottom-up data-driven approaches in this big data era, traditional experimental approaches, and top-down theory-driven approaches



How early language learning in children differs from LLM learning



Mike's rational speech act model of language use, which considers the intentions or pragmatics of speakers and listeners in dialogue.




Language &amp; Cognition Lab



Twitt]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/07/171-Michael-C-Frank-wevsite.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 170 Ali Mohebi: Starting a Research Lab</title>
	<link>https://braininspired.co/podcast/170/</link>
	<pubDate>Tue, 11 Jul 2023 18:11:18 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2226</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.</p>



<ul class="wp-block-list">
<li><a href="https://mohebial.com/">Ali's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mohebial">@mohebial</a></li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









In this episode I have a casual chat with Ali Mohebi about his new faculty posi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.</p>



<ul class="wp-block-list">
<li><a href="https://mohebial.com/">Ali's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mohebial">@mohebial</a></li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2226/170.mp3" length="75161365" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.




Ali's website.



Twitter:&nbsp;@mohebial]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/07/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/07/thumb-website.jpg</url>
		<title>BI 170 Ali Mohebi: Starting a Research Lab</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:17:15</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.




Ali's website.



Twitter:&nbsp;@mohebial]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/07/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 169 Andrea Martin: Neural Dynamics and Language</title>
	<link>https://braininspired.co/podcast/169/</link>
	<pubDate>Wed, 28 Jun 2023 18:00:16 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2217</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, its infinite expressibility, while adhering to physiological data we <em>can </em>measure from human brains.</p>



<p>Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast, but they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.</p>



<p>One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.</p>



<ul class="wp-block-list">
<li><a href="https://sites.google.com/site/aemn1011/home">Andrea's website.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/andrea_e_martin">@andrea_e_martin</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://pure.mpg.de/rest/items/item_3196366_5/component/file_3240713/content">A Compositional Neural Architecture for Language</a></li>



<li><a href="https://pure.mpg.de/rest/items/item_3335841_1/component/file_3335842/content">An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions</a></li>



<li><a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001713">Neural dynamics differentially encode phrases and sentences during spoken language comprehension</a></li>



<li><a href="https://psyarxiv.com/x59un/">Hierarchical structure in language and action: A formal comparison</a></li>
</ul>
</li>



<li>Andrea mentions this book: <a href="https://amzn.to/3NDa5Cd">The Geometry of Biological Time</a>.</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









My guest today is Andrea Martin, who is the Research Group Leader in the depart]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, its infinite expressibility, while adhering to physiological data we <em>can </em>measure from human brains.</p>



<p>Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast, but they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.</p>



<p>One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.</p>



<ul class="wp-block-list">
<li><a href="https://sites.google.com/site/aemn1011/home">Andrea's website.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/andrea_e_martin">@andrea_e_martin</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://pure.mpg.de/rest/items/item_3196366_5/component/file_3240713/content">A Compositional Neural Architecture for Language</a></li>



<li><a href="https://pure.mpg.de/rest/items/item_3335841_1/component/file_3335842/content">An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions</a></li>



<li><a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001713">Neural dynamics differentially encode phrases and sentences during spoken language comprehension</a></li>



<li><a href="https://psyarxiv.com/x59un/">Hierarchical structure in language and action: A formal comparison</a></li>
</ul>
</li>



<li>Andrea mentions this book: <a href="https://amzn.to/3NDa5Cd">The Geometry of Biological Time</a>.</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2217/169.mp3" length="98470438" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.



Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast; in brief, they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.



One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.




Andrea's website.



Twitter: @andrea_e_martin.



Related papers

A Compositional Neural Architecture for Language



An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions



Neural dynamics differentially encode phrases and sentences during spoken language comprehension



Hierarchical structure in language and action: A formal comparison





Andrea mentions this book: The Geometry of Biological Time.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/06/thumb-2-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/06/thumb-2-website-1.jpg</url>
		<title>BI 169 Andrea Martin: Neural Dynamics and Language</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:41:30</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.



Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast; in brief, they are a kind of abstract structure in the space of possible]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/06/thumb-2-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness</title>
	<link>https://braininspired.co/podcast/168/</link>
	<pubDate>Fri, 02 Jun 2023 15:42:22 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2206</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>This is one in a periodic series of episodes with <a href="https://braininspired.co/podcast/136/">Alex Gomez-Marin</a>, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?</p>



<p>Frauke Sandig and Eric Black recently made the documentary film <a href="https://aware-film.com/">AWARE: Glimpses of Consciousness</a>, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.</p>



<p>This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!</p>



<ul class="wp-block-list">
<li><a href="https://aware-film.com/">AWARE: Glimpses of Consciousness</a></li>



<li><a href="https://umbrellafilms.org/">Umbrella Films</a></li>
</ul>



<p>0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about whats missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











This is one in a periodic series of episodes with Alex Gomez-Marin, exploring]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>This is one in a periodic series of episodes with <a href="https://braininspired.co/podcast/136/">Alex Gomez-Marin</a>, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?</p>



<p>Frauke Sandig and Eric Black recently made the documentary film <a href="https://aware-film.com/">AWARE: Glimpses of Consciousness</a>, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.</p>



<p>This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!</p>



<ul class="wp-block-list">
<li><a href="https://aware-film.com/">AWARE: Glimpses of Consciousness</a></li>



<li><a href="https://umbrellafilms.org/">Umbrella Films</a></li>
</ul>



<p>0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2206/168.mp3" length="111911327" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?



Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.



This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!




AWARE: Glimpses of Consciousness



Umbrella Films




0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/06/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/06/thumb-website.jpg</url>
		<title>BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:54:42</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art change our scientific attitudes and perspectives?



Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinke]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/06/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 167 Panayiota Poirazi: AI Brains Need Dendrites</title>
	<link>https://braininspired.co/podcast/167/</link>
	<pubDate>Sat, 27 May 2023 15:22:17 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2202</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, <a href="https://braininspired.co/podcast/138/">with whom I chatted in episode 138</a>, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.</p>



<ul class="wp-block-list">
<li><a href="https://dendrites.gr/">Poirazi Lab</a>
</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/yiotapoirazi">@YiotaPoirazi</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://zenodo.org/record/4955397">Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks</a>.</li>



<li><a href="https://doi.org/10.1038/s41583-020-0301-7">Illuminating dendritic function with computational models</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-022-35747-8.pdf">Introducing the Dendrify framework for incorporating dendrites to spiking neural networks</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(03)00149-1">Pyramidal Neuron as Two-Layer Neural Network</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Bi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, <a href="https://braininspired.co/podcast/138/">with whom I chatted in episode 138</a>, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.</p>



<ul class="wp-block-list">
<li><a href="https://dendrites.gr/">Poirazi Lab</a>
</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/yiotapoirazi">@YiotaPoirazi</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://zenodo.org/record/4955397">Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks</a>.</li>



<li><a href="https://doi.org/10.1038/s41583-020-0301-7">Illuminating dendritic function with computational models</a>.</li>



<li><a href="https://www.nature.com/articles/s41467-022-35747-8.pdf">Introducing the Dendrify framework for incorporating dendrites to spiking neural networks</a>.</li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(03)00149-1">Pyramidal Neuron as Two-Layer Neural Network</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2202/167.mp3" length="86377995" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.




Poirazi Lab





Twitter: @YiotaPoirazi.



Related papers

Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks.



Illuminating dendritic function with computational models.



Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.



Pyramidal Neuron as Two-Layer Neural Network






0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/05/thumb01-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/05/thumb01-website.jpg</url>
		<title>BI 167 Panayiota Poirazi: AI Brains Need Dendrites</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:43</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many important signal transformations before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artifi]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/05/thumb01-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 166 Nick Enfield: Language vs. Reality</title>
	<link>https://braininspired.co/podcast/166/</link>
	<pubDate>Tue, 09 May 2023 18:00:02 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2195</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, <a href="https://amzn.to/40NK3Qm">Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists</a>. A central question in the book is: what is language for? What is the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.</p>





<p>For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!"&nbsp; In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.</p>



<p>From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship <em>between</em> language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.</p>



<ul class="wp-block-list">
<li><a href="https://nickenfield.org/">Nick's website</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/njenfield">@njenfield</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/40NK3Qm">Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists</a>.</li>
</ul>
</li>



<li>Papers:
<ul class="wp-block-list">
<li><a href="https://royalsocietypublishing.org/doi/10.1098/rstb.2021.0352">Linguistic concepts are self-generating choice architectures</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Nick Enfield is a professor of linguistics at the University of Sydney. In th]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, <a href="https://amzn.to/40NK3Qm">Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists</a>. A central question in the book is: what is language for? What is the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.</p>





<p>For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!"&nbsp; In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.</p>



<p>From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship <em>between</em> language and reality, and the idea that all language is framing: that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.</p>



<ul class="wp-block-list">
<li><a href="https://nickenfield.org/">Nick's website</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/njenfield">@njenfield</a></li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/40NK3Qm">Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists</a>.</li>
</ul>
</li>



<li>Papers:
<ul class="wp-block-list">
<li><a href="https://royalsocietypublishing.org/doi/10.1098/rstb.2021.0352">Linguistic concepts are self-generating choice architectures</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2195/166.mp3" length="85403105" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What is the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.





For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those four words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.



From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing: that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.




Nick's website



Twitter:&nbsp;@njenfield



Book:

Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.





Papers:

Linguistic concepts are self-generating choice architectures






0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/05/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/05/thumb-website.jpg</url>
		<title>BI 166 Nick Enfield: Language vs. Reality</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What is the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/05/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 165 Jeffrey Bowers: Psychology Gets No Respect</title>
	<link>https://braininspired.co/podcast/165/</link>
	<pubDate>Wed, 12 Apr 2023 15:46:38 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2192</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and researchers have compared the activity in the models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work.</p>



<p>However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests like those performed in psychology, for example, to ask whether the models are indeed solving tasks like our brains and minds do. Jeff and his group, among others, have been doing just that and are discovering differences in models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.</p>



<ul class="wp-block-list">
<li><a href="https://jeffbowers.blogs.bristol.ac.uk/">Website</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/jeffrey_bowers">@jeffrey_bowers</a></li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://psyarxiv.com/5zf4s/">Deep Problems with Neural Network Models of Human Vision</a>.</li>



<li><a href="https://bpb-eu-w2.wpmucdn.com/blogs.bristol.ac.uk/dist/b/403/files/2017/11/bowers-tics-2017.pdf">Parallel Distributed Processing Theory in the Age of Deep Networks</a>.</li>



<li><a href="https://arxiv.org/pdf/2204.03740.pdf">Successes and critical failures of neural networks in capturing human-like speech recognition</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Jeffrey Bowers is a psychologist and professor at the University of Bristol. ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and researchers have compared the activity in the models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work.</p>



<p>However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests like those performed in psychology, for example, to ask whether the models are indeed solving tasks like our brains and minds do. Jeff and his group, among others, have been doing just that and are discovering differences in models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.</p>



<ul class="wp-block-list">
<li><a href="https://jeffbowers.blogs.bristol.ac.uk/">Website</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/jeffrey_bowers">@jeffrey_bowers</a></li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://psyarxiv.com/5zf4s/">Deep Problems with Neural Network Models of Human Vision</a>.</li>



<li><a href="https://bpb-eu-w2.wpmucdn.com/blogs.bristol.ac.uk/dist/b/403/files/2017/11/bowers-tics-2017.pdf">Parallel Distributed Processing Theory in the Age of Deep Networks</a>.</li>



<li><a href="https://arxiv.org/pdf/2204.03740.pdf">Successes and critical failures of neural networks in capturing human-like speech recognition</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2192/165.mp3" length="95098494" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and researchers have compared the activity in the models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning type models are the best models of how our brains and cognition work.



However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests like those performed in psychology, for example, to ask whether the models are indeed solving tasks like our brains and minds do. Jeff and his group, among others, have been doing just that and are discovering differences in models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.




Website



Twitter:&nbsp;@jeffrey_bowers



Related papers:

Deep Problems with Neural Network Models of Human Vision.



Parallel Distributed Processing Theory in the Age of Deep Networks.



Successes and critical failures of neural networks in capturing human-like speech recognition.






0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/04/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/04/thumb-website-1.jpg</url>
		<title>BI 165 Jeffrey Bowers: Psychology Gets No Respect</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:38:45</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, where researchers have compared the activity in the models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/04/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 164 Gary Lupyan: How Language Affects Thought</title>
	<link>https://braininspired.co/podcast/164/</link>
	<pubDate>Sat, 01 Apr 2023 12:07:51 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2189</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Gary Lupyan runs the <a href="http://sapir.psych.wisc.edu/">Lupyan Lab</a> at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had <a href="https://braininspired.co/podcast/163/">last episode with Ellie Pavlick</a>, in that we partly continue to discuss large language models. But Gary is more focused on how language, and naming things, categorizing things, changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.</p>



<p>And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.</p>



<ul class="wp-block-list">
<li><a href="http://sapir.psych.wisc.edu/">Lupyan Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/glupyan">@glupyan</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="http://sapir.psych.wisc.edu/papers/lupyan_uchiyama_thompson_casasanto_2023.pdf">Hidden Differences in Phenomenal Experience</a>.</li>



<li><a href="http://sapir.psych.wisc.edu/papers/nedergaard_wallentin_lupyan_2022.pdf">Verbal interference paradigms: A systematic review investigating the role of language in cognition</a>.</li>
</ul>
</li>



<li>Gary mentioned Richard Feynman's Ways of Thinking video.</li>



<li>Gary and Andy Clark's Aeon article: <a href="https://aeon.co/essays/how-might-telepathy-actually-work-outside-the-realm-of-sci-fi">Super-cooperators</a>.</li>
</ul>



<p>0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Gary Lupyan runs the Lupyan Lab at University of Wisconsin, Madison, where he]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Gary Lupyan runs the <a href="http://sapir.psych.wisc.edu/">Lupyan Lab</a> at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had <a href="https://braininspired.co/podcast/163/">last episode with Ellie Pavlick</a>, in that we partly continue to discuss large language models. But Gary is more focused on how language, and naming things, categorizing things, changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.</p>



<p>And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.</p>



<ul class="wp-block-list">
<li><a href="http://sapir.psych.wisc.edu/">Lupyan Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/glupyan">@glupyan</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="http://sapir.psych.wisc.edu/papers/lupyan_uchiyama_thompson_casasanto_2023.pdf">Hidden Differences in Phenomenal Experience</a>.</li>



<li><a href="http://sapir.psych.wisc.edu/papers/nedergaard_wallentin_lupyan_2022.pdf">Verbal interference paradigms: A systematic review investigating the role of language in cognition</a>.</li>
</ul>
</li>



<li>Gary mentioned Richard Feynman's Ways of Thinking video.</li>



<li>Gary and Andy Clark's Aeon article: <a href="https://aeon.co/essays/how-might-telepathy-actually-work-outside-the-realm-of-sci-fi">Super-cooperators</a>.</li>
</ul>



<p>0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2189/164.mp3" length="88524580" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language, and naming things, categorizing things, changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.



And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.




Lupyan Lab.



Twitter:&nbsp;@glupyan.



Related papers:

Hidden Differences in Phenomenal Experience.



Verbal interference paradigms: A systematic review investigating the role of language in cognition.





Gary mentioned Richard Feynman's Ways of Thinking video.



Gary and Andy Clark's Aeon article: Super-cooperators.




0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/04/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/04/thumb-website.jpg</url>
		<title>BI 164 Gary Lupyan: How Language Affects Thought</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:31:54</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language, and naming things, categorizing things, changes our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.



And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/04/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 163 Ellie Pavlick: The Mind of a Language Model</title>
	<link>https://braininspired.co/podcast/163/</link>
	<pubDate>Mon, 20 Mar 2023 19:03:18 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2182</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Ellie Pavlick runs her <a href="https://lunar.cs.brown.edu/#">Language Understanding and Representation Lab</a> at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.</p>



<ul class="wp-block-list">
<li><a href="https://lunar.cs.brown.edu/#">Language Understanding and Representation Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/brown_nlp?lang=en">@Brown_NLP</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.annualreviews.org/doi/pdf/10.1146/annurev-linguistics-031120-122924">Semantic Structure in Deep Learning</a>.</li>



<li><a href="https://aclanthology.org/2022.starsem-1.23.pdf">Pretraining on Interactions for Learning Grounded Affordance Representations</a>.</li>



<li><a href="https://openreview.net/pdf?id=gJcEM8sxHK">Mapping Language Models to Grounded Conceptual Spaces</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ellie Pavlick runs her Language Understanding and Representation Lab at Brown]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Ellie Pavlick runs her <a href="https://lunar.cs.brown.edu/#">Language Understanding and Representation Lab</a> at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.</p>



<ul class="wp-block-list">
<li><a href="https://lunar.cs.brown.edu/#">Language Understanding and Representation Lab</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/brown_nlp?lang=en">@Brown_NLP</a></li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.annualreviews.org/doi/pdf/10.1146/annurev-linguistics-031120-122924">Semantic Structure in Deep Learning</a>.</li>



<li><a href="https://aclanthology.org/2022.starsem-1.23.pdf">Pretraining on Interactions for Learning Grounded Affordance Representations</a>.</li>



<li><a href="https://openreview.net/pdf?id=gJcEM8sxHK">Mapping Language Models to Grounded Conceptual Spaces</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2182/163.mp3" length="78609455" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.




Language Understanding and Representation Lab



Twitter:&nbsp;@Brown_NLP



Related papers

Semantic Structure in Deep Learning.



Pretraining on Interactions for Learning Grounded Affordance Representations.



Mapping Language Models to Grounded Conceptual Spaces.






0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/03/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/03/thumb-website-1.jpg</url>
		<title>BI 163 Ellie Pavlick: The Mind of a Language Model</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:21:34</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbolic-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/03/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 162 Earl K. Miller: Thoughts are an Emergent Property</title>
	<link>https://braininspired.co/podcast/162/</link>
	<pubDate>Wed, 08 Mar 2023 16:44:21 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2176</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.</p>



<p>Recently on BI we've discussed oscillations quite a bit. In <a href="https://braininspired.co/podcast/153/">episode 153</a>, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In <a href="https://braininspired.co/podcast/160/">episode 160</a>, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, which directly agrees with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, an account of how brain oscillations can dictate where, in various brain areas, neural activity is on or off, and hence whether it contributes to ongoing mental function. We also discuss working memory in particular, and a host of related topics.</p>



<ul class="wp-block-list">
<li><a href="http://ekmillerlab.mit.edu/">Miller lab.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/MillerLabMIT">@MillerLabMIT</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2013/03/Miller-Cohen-20011.pdf">An integrative theory of prefrontal cortex function. Annual Review of Neuroscience</a>.</li>



<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/12/Buchman-and-Miller-JOCN-2023.pdf">Working Memory Is Complex and Dynamic, Like Your Thoughts</a>.</li>



<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/02/Traveling-Waves-PLOS-Comp-Bio-2022.pdf">Traveling waves in the prefrontal cortex during working memory</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.</p>



<p>Recently on BI we've discussed oscillations quite a bit. In <a href="https://braininspired.co/podcast/153/">episode 153</a>, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In <a href="https://braininspired.co/podcast/160/">episode 160</a>, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, which directly agrees with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, an account of how brain oscillations can dictate where, in various brain areas, neural activity is on or off, and hence whether it contributes to ongoing mental function. We also discuss working memory in particular, and a host of related topics.</p>



<ul class="wp-block-list">
<li><a href="http://ekmillerlab.mit.edu/">Miller lab.</a></li>



<li>Twitter:&nbsp;<a href="https://twitter.com/MillerLabMIT">@MillerLabMIT</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2013/03/Miller-Cohen-20011.pdf">An integrative theory of prefrontal cortex function. Annual Review of Neuroscience</a>.</li>



<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/12/Buchman-and-Miller-JOCN-2023.pdf">Working Memory Is Complex and Dynamic, Like Your Thoughts</a>.</li>



<li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/02/Traveling-Waves-PLOS-Comp-Bio-2022.pdf">Traveling waves in the prefrontal cortex during working memory</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2176/162.mp3" length="80413787" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.



Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low-frequency oscillations exert top-down control on incoming sensory stimuli, which directly agrees with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, an account of how brain oscillations can dictate where, in various brain areas, neural activity is on or off, and hence whether it contributes to ongoing mental function. We also discuss working memory in particular, and a host of related topics.




Miller lab.



Twitter:&nbsp;@MillerLabMIT.



Related papers:

An integrative theory of prefrontal cortex function. Annual Review of Neuroscience.



Working Memory Is Complex and Dynamic, Like Your Thoughts.



Traveling waves in the prefrontal cortex during working memory.






0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/03/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/03/thumb-website.jpg</url>
		<title>BI 162 Earl K. Miller: Thoughts are an Emergent Property</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:23:27</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.



Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument.&nbsp; In episode 160, Ole Jensen discussed his wo]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/03/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 161 Hugo Spiers: Navigation and Spatial Cognition</title>
	<link>https://braininspired.co/podcast/161/</link>
	<pubDate>Fri, 24 Feb 2023 15:32:08 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2169</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, and menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, the different factors that seem to matter most for our navigation skills, and so on.</p>



<ul class="wp-block-list">
<li><a href="https://spierslab.com/">Spiers Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/hugospiers">@hugospiers</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.biorxiv.org/content/10.1101/2020.09.26.314815v6">Predictive maps in rats and humans for spatial navigation</a>.</li>



<li><a href="https://www.researchgate.net/publication/365657477_From_cognitive_maps_to_spatial_schemas">From cognitive maps to spatial schemas</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/full/10.1002/hipo.23395">London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London</a>.</li>



<li><a href="https://hal.science/hal-03472319/file/SpiersTICS2022.pdf">Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest</a>.</li>
</ul>
</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Hugo Spiers runs the Spiers Lab at University College London. In general Hugo i]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, and menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, the different factors that seem to matter most for our navigation skills, and so on.</p>



<ul class="wp-block-list">
<li><a href="https://spierslab.com/">Spiers Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/hugospiers">@hugospiers</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.biorxiv.org/content/10.1101/2020.09.26.314815v6">Predictive maps in rats and humans for spatial navigation</a>.</li>



<li><a href="https://www.researchgate.net/publication/365657477_From_cognitive_maps_to_spatial_schemas">From cognitive maps to spatial schemas</a>.</li>



<li><a href="https://onlinelibrary.wiley.com/doi/full/10.1002/hipo.23395">London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London</a>.</li>



<li><a href="https://hal.science/hal-03472319/file/SpiersTICS2022.pdf">Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest</a>.</li>
</ul>
</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2169/161.mp3" length="91146289" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, and menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, the different factors that seem to matter most for our navigation skills, and so on.




Spiers Lab.



Twitter:&nbsp;@hugospiers.



Related papers

Predictive maps in rats and humans for spatial navigation.



From cognitive maps to spatial schemas.



London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London.



Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/02/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/02/thumb-website-1.jpg</url>
		<title>BI 161 Hugo Spiers: Navigation and Spatial Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:34:38</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example i]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/02/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 160 Ole Jensen: Rhythms of Cognition</title>
	<link>https://braininspired.co/podcast/160/</link>
	<pubDate>Tue, 07 Feb 2023 16:08:37 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2163</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.</p>



<ul class="wp-block-list">
<li><a href="https://neuosc.com/">The Neuronal Oscillations Group</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuosc">@neuosc</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2010.00186/full">Shaping functional architecture by oscillatory alpha activity: gating by inhibition</a></li>



<li><a href="https://www.jneurosci.org/content/37/15/4117">FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex</a></li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(13)00231-6">The theta-gamma neural code</a></li>



<li><a href="https://www.biorxiv.org/content/biorxiv/early/2021/03/26/2021.03.25.436919.full.pdf">A pipelining mechanism supporting previewing during visual exploration and reading.</a></li>



<li><a href="https://elifesciences.org/articles/39061">Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:58 - Oscillations' import over the years
5:51 - Oscillations big picture
17:62 - Oscillations vs. traveling waves
22:00 - Oscillations and algorithms
28:53 - Alpha oscillations and working memory
44:46 - Alpha as the controller
48:55 - Frequency tagging
52:49 - Timing of attention
57:41 - Pipelining neural processing
1:03:38 - Previewing during reading
1:15:50 - Previewing, prediction, and large language models
1:24:27 - Dyslexia</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ole Jensen is co-director of the Centre for Human Brain Health at University ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Ole Jensen is co-director of the Centre for Human Brain Health at the University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.</p>



<ul class="wp-block-list">
<li><a href="https://neuosc.com/">The Neuronal Oscillations Group</a>.
</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/neuosc">@neuosc</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2010.00186/full">Shaping functional architecture by oscillatory alpha activity: gating by inhibition</a></li>



<li><a href="https://www.jneurosci.org/content/37/15/4117">FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex</a></li>



<li><a href="https://www.cell.com/neuron/fulltext/S0896-6273(13)00231-6">The theta-gamma neural code</a></li>



<li><a href="https://www.biorxiv.org/content/biorxiv/early/2021/03/26/2021.03.25.436919.full.pdf">A pipelining mechanism supporting previewing during visual exploration and reading.</a></li>



<li><a href="https://elifesciences.org/articles/39061">Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:58 - Oscillations' import over the years
5:51 - Oscillations big picture
17:62 - Oscillations vs. traveling waves
22:00 - Oscillations and algorithms
28:53 - Alpha oscillations and working memory
44:46 - Alpha as the controller
48:55 - Frequency tagging
52:49 - Timing of attention
57:41 - Pipelining neural processing
1:03:38 - Previewing during reading
1:15:50 - Previewing, prediction, and large language models
1:24:27 - Dyslexia</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2163/160.mp3" length="85409466" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ole Jensen is co-director of the Centre for Human Brain Health at the University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.




The Neuronal Oscillations Group.







Twitter:&nbsp;@neuosc.



Related papers

Shaping functional architecture by oscillatory alpha activity: gating by inhibition



FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex



The theta-gamma neural code



A pipelining mechanism supporting previewing during visual exploration and reading.



Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity.






0:00 - Intro
2:58 - Oscillations' import over the years
5:51 - Oscillations big picture
17:62 - Oscillations vs. traveling waves
22:00 - Oscillations and algorithms
28:53 - Alpha oscillations and working memory
44:46 - Alpha as the controller
48:55 - Frequency tagging
52:49 - Timing of attention
57:41 - Pipelining neural processing
1:03:38 - Previewing during reading
1:15:50 - Previewing, prediction, and large language models
1:24:27 - Dyslexia]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/02/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/02/thumb-website.jpg</url>
		<title>BI 160 Ole Jensen: Rhythms of Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:39</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/02/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 159 Chris Summerfield: Natural General Intelligence</title>
	<link>https://braininspired.co/podcast/159/</link>
	<pubDate>Thu, 26 Jan 2023 23:18:14 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2143</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from <a href="https://braininspired.co/podcast/95/">episode 95 with Sam Gershman</a>, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, <a href="https://amzn.to/3GIY9tO">Natural General Intelligence: How understanding the brain can help us build AI</a>. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.</p>





<ul class="wp-block-list">
<li><a href="https://humaninformationprocessing.com/">Human Information Processing Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/summerfieldlab">@summerfieldlab</a>.</li>



<li>Book: <a href="https://amzn.to/3GIY9tO">Natural General Intelligence: How understanding the brain can help us build AI</a>.</li>



<li>Other books mentioned:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3kC1FPu">Are We Smart Enough to Know How Smart Animals Are?</a> by Frans de Waal</li>



<li><a href="https://amzn.to/3ZQi96x">The Mind is Flat</a> by Nick Chater.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:20 - Natural General Intelligence
8:05 - AI and Neuro interaction
21:42 - How to build AI
25:54 - Umwelts and affordances
32:07 - Different kind of intelligence
39:16 - Ecological validity and AI
48:30 - Is reward enough?
1:05:14 - Beyond brains
1:15:10 - Large language models and brains</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Chris Summerfield runs the Human Information Processing Lab at University of ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from <a href="https://braininspired.co/podcast/95/">episode 95 with Sam Gershman</a>, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, <a href="https://amzn.to/3GIY9tO">Natural General Intelligence: How understanding the brain can help us build AI</a>. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.</p>





<ul class="wp-block-list">
<li><a href="https://humaninformationprocessing.com/">Human Information Processing Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/summerfieldlab">@summerfieldlab</a>.</li>



<li>Book: <a href="https://amzn.to/3GIY9tO">Natural General Intelligence: How understanding the brain can help us build AI</a>.</li>



<li>Other books mentioned:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3kC1FPu">Are We Smart Enough to Know How Smart Animals Are?</a> by Frans de Waal</li>



<li><a href="https://amzn.to/3ZQi96x">The Mind is Flat</a> by Nick Chater.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
2:20 - Natural General Intelligence
8:05 - AI and Neuro interaction
21:42 - How to build AI
25:54 - Umwelts and affordances
32:07 - Different kind of intelligence
39:16 - Ecological validity and AI
48:30 - Is reward enough?
1:05:14 - Beyond brains
1:15:10 - Large language models and brains</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2143/159.mp3" length="85633753" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.






Human Information Processing Lab.



Twitter:&nbsp;@summerfieldlab.



Book: Natural General Intelligence: How understanding the brain can help us build AI.



Other books mentioned:

Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal



The Mind is Flat by Nick Chater.






0:00 - Intro
2:20 - Natural General Intelligence
8:05 - AI and Neuro interaction
21:42 - How to build AI
25:54 - Umwelts and affordances
32:07 - Different kind of intelligence
39:16 - Ecological validity and AI
48:30 - Is reward enough?
1:05:14 - Beyond brains
1:15:10 - Large language models and brains]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-website-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/01/thumb-website-1.jpg</url>
		<title>BI 159 Chris Summerfield: Natural General Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:53</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.






Human Information Processing Lab.



Twitter:&nbsp;@summerfie]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-website-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 158 Paul Rosenbloom: Cognitive Architectures</title>
	<link>https://braininspired.co/podcast/158/</link>
	<pubDate>Mon, 16 Jan 2023 13:50:23 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2135</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR any more, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book <a href="https://amzn.to/3WojDlL">On Computing: The Fourth Great Scientific Domain</a>.</p>





<p>He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to others, and can communicate effectively with your peers.</p>



<p>All of what I just said, and much of what we discuss, can be found in Paul's memoir, <a href="https://www.dropbox.com/s/5o39z7gj2n1utg3/The%20Search%20for%20Insight%207-31-22.pdf?dl=0" target="_blank" rel="noreferrer noopener">In Search of Insight: My Life as an Architectural Explorer</a>.</p>



<ul class="wp-block-list">
<li><a href="https://sites.usc.edu/rosenbloom/">Paul's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li>Working memoir: <a rel="noreferrer noopener" href="https://www.dropbox.com/s/5o39z7gj2n1utg3/The%20Search%20for%20Insight%207-31-22.pdf?dl=0" target="_blank">In Search of Insight: My Life as an Architectural Explorer</a>.</li>



<li>Book: <a href="https://amzn.to/3WojDlL">On Computing: The Fourth Great Scientific Domain</a>.</li>



<li><a href="https://soar.eecs.umich.edu/pubs/Laird_etal_StandardModel_AImag_2018.pdf">A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics</a>.</li>



<li><a href="https://soar.eecs.umich.edu/pubs/stocco2021connectome.pdf">Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains</a>.</li>



<li><a href="https://ojs.library.carleton.ca/index.php/cmcb/index">Common Model of Cognition Bulletin</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:26 - A career of exploration
7:00 - Allen Newell
14:47 - Relational model and dichotomic maps
24:22 - Cognitive architectures
28:31 - SOAR cognitive architecture
41:14 - Sigma cognitive architecture
43:58 - SOAR vs. Sigma
53:06 - Cognitive architecture community
55:31 - Common model of cognition
1:11:13 - What's missing from the common model
1:17:48 - Brains vs. cognitive architectures
1:21:22 - Mapping the common model onto the brain
1:24:50 - Deep learning
1:30:23 - AGI</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about whats missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Paul Rosenbloom is Professor Emeritus of Computer Science at the University of ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR any more, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book <a href="https://amzn.to/3WojDlL">On Computing: The Fourth Great Scientific Domain</a>.</p>





<p>He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to others, and can communicate effectively with your peers.</p>



<p>All of what I just said, and much of what we discuss, can be found in Paul's memoir, <a href="https://www.dropbox.com/s/5o39z7gj2n1utg3/The%20Search%20for%20Insight%207-31-22.pdf?dl=0" target="_blank" rel="noreferrer noopener">In Search of Insight: My Life as an Architectural Explorer</a>.</p>



<ul class="wp-block-list">
<li><a href="https://sites.usc.edu/rosenbloom/">Paul's website</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li>Working memoir: <a rel="noreferrer noopener" href="https://www.dropbox.com/s/5o39z7gj2n1utg3/The%20Search%20for%20Insight%207-31-22.pdf?dl=0" target="_blank">In Search of Insight: My Life as an Architectural Explorer</a>.</li>



<li>Book: <a href="https://amzn.to/3WojDlL">On Computing: The Fourth Great Scientific Domain</a>.</li>



<li><a href="https://soar.eecs.umich.edu/pubs/Laird_etal_StandardModel_AImag_2018.pdf">A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics</a>.</li>



<li><a href="https://soar.eecs.umich.edu/pubs/stocco2021connectome.pdf">Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains</a>.</li>



<li><a href="https://ojs.library.carleton.ca/index.php/cmcb/index">Common Model of Cognition Bulletin</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:26 - A career of exploration
7:00 - Allen Newell
14:47 - Relational model and dichotomic maps
24:22 - Cognitive architectures
28:31 - SOAR cognitive architecture
41:14 - Sigma cognitive architecture
43:58 - SOAR vs. Sigma
53:06 - Cognitive architecture community
55:31 - Common model of cognition
1:11:13 - What's missing from the common model
1:17:48 - Brains vs. cognitive architectures
1:21:22 - Mapping the common model onto the brain
1:24:50 - Deep learning
1:30:23 - AGI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2135/158.mp3" length="91689946" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR any more, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain.





He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to others, and can communicate effectively with your peers.



All of what I just said, and much of what we discuss, can be found in Paul's memoir, In Search of Insight: My Life as an Architectural Explorer.




Paul's website.



Related papers

Working memoir: In Search of Insight: My Life as an Architectural Explorer.



Book: On Computing: The Fourth Great Scientific Domain.



A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics.



Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains.



Common Model of Cognition Bulletin.






0:00 - Intro
3:26 - A career of exploration
7:00 - Allen Newell
14:47 - Relational model and dichotomic maps
24:22 - Cognitive architectures
28:31 - SOAR cognitive architecture
41:14 - Sigma cognitive architecture
43:58 - SOAR vs. Sigma
53:06 - Cognitive architecture community
55:31 - Common model of cognition
1:11:13 - What's missing from the common model
1:17:48 - Brains vs. cognitive architectures
1:21:22 - Mapping the common model onto the brain
1:24:50 - Deep learning
1:30:23 - AGI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/01/thumb-website.jpg</url>
		<title>BI 158 Paul Rosenbloom: Cognitive Architectures</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:35:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and in Paul's case the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR any more, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things Paul stepped back and explored how our various scientific domains are related, and how computing ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 157 Sarah Robins: Philosophy of Memory</title>
	<link>https://braininspired.co/podcast/157/</link>
	<pubDate>Mon, 02 Jan 2023 20:32:43 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2131</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see <a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a>, and <a href="https://braininspired.co/podcast/127/">BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</a>).</p>



<p>Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.</p>



<p>We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea.</p>



<p>We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory.</p>



<ul class="wp-block-list">
<li><a href="https://www.sarahkrobins.com/">Sarah's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/SarahKRobins">@SarahKRobins</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li>Her Memory chapter, with Felipe de Brigard, in the book <a href="https://amzn.to/3C89n9F">Mind, Cognition, and Neuroscience: A Philosophical Introduction.</a></li>



<li><a href="https://c66264d2-5fb2-4eed-b8b6-7899b2613e1c.filesusr.com/ugd/15e503_442ce387006d4c50b497b940a3eded1f.docx?dn=Memory%20and%20Optogenetic%20Intervention.docx">Memory and Optogenetic Intervention: Separating the engram from the ecphory</a>.</li>



<li><a href="https://www.cambridge.org/core/journals/philosophy-of-science/article/abs/stable-engrams-and-neural-dynamics/9945B7082CDC4EA3A45C9B458F74EB29">Stable Engrams and Neural Dynamics.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:18 - Philosophy of memory
5:10 - Making a move
6:55 - State of philosophy of memory
11:19 - Memory traces or the engram
20:44 - Taxonomy of memory
25:50 - Cognitive ontologies, neuroscience, and psychology
29:39 - Optogenetics
33:48 - Memory traces vs. neural dynamics and consolidation
40:32 - What is the boundary of a memory?
43:00 - Process philosophy and memory
45:07 - Memory vs. imagination
49:40 - Constructivist view of memory and imagination
54:05 - Is memory for the future?
58:00 - Memory errors and intelligence
1:00:42 - Memory and AI
1:06:20 - Creativity and memory errors</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Sarah Robins is a philosopher at the University of Kansas, one of a growing hand]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see <a href="https://braininspired.co/podcast/126/">BI 126 Randy Gallistel: Where Is the Engram?</a>, and <a href="https://braininspired.co/podcast/127/">BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</a>).</p>



<p>Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.</p>



<p>We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea.</p>



<p>We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory.</p>



<ul class="wp-block-list">
<li><a href="https://www.sarahkrobins.com/">Sarah's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/SarahKRobins">@SarahKRobins</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li>Her Memory chapter, with Felipe de Brigard, in the book <a href="https://amzn.to/3C89n9F">Mind, Cognition, and Neuroscience: A Philosophical Introduction.</a></li>



<li><a href="https://c66264d2-5fb2-4eed-b8b6-7899b2613e1c.filesusr.com/ugd/15e503_442ce387006d4c50b497b940a3eded1f.docx?dn=Memory%20and%20Optogenetic%20Intervention.docx">Memory and Optogenetic Intervention: Separating the engram from the ecphory</a>.</li>



<li><a href="https://www.cambridge.org/core/journals/philosophy-of-science/article/abs/stable-engrams-and-neural-dynamics/9945B7082CDC4EA3A45C9B458F74EB29">Stable Engrams and Neural Dynamics.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:18 - Philosophy of memory
5:10 - Making a move
6:55 - State of philosophy of memory
11:19 - Memory traces or the engram
20:44 - Taxonomy of memory
25:50 - Cognitive ontologies, neuroscience, and psychology
29:39 - Optogenetics
33:48 - Memory traces vs. neural dynamics and consolidation
40:32 - What is the boundary of a memory?
43:00 - Process philosophy and memory
45:07 - Memory vs. imagination
49:40 - Constructivist view of memory and imagination
54:05 - Is memory for the future?
58:00 - Memory errors and intelligence
1:00:42 - Memory and AI
1:06:20 - Creativity and memory errors</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2131/157.mp3" length="78045950" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting).



Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.



We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea.



We also discuss the impact of optogenetics on the philosophy and neuroscience of memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory.




Sarah's website.



Twitter:&nbsp;@SarahKRobins.



Related papers:

Her Memory chapter, with Felipe de Brigard, in the book Mind, Cognition, and Neuroscience: A Philosophical Introduction.



Memory and Optogenetic Intervention: Separating the engram from the ecphory.



Stable Engrams and Neural Dynamics.






0:00 - Intro
4:18 - Philosophy of memory
5:10 - Making a move
6:55 - State of philosophy of memory
11:19 - Memory traces or the engram
20:44 - Taxonomy of memory
25:50 - Cognitive ontologies, neuroscience, and psychology
29:39 - Optogenetics
33:48 - Memory traces vs. neural dynamics and consolidation
40:32 - What is the boundary of a memory?
43:00 - Process philosophy and memory
45:07 - Memory vs. imagination
49:40 - Constructivist view of memory and imagination
54:05 - Is memory for the future?
58:00 - Memory errors and intelligence
1:00:42 - Memory and AI
1:06:20 - Creativity and memory errors]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-1-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2023/01/thumb-1-website.jpg</url>
		<title>BI 157 Sarah Robins: Philosophy of Memory</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:20:59</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting).



Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.



We discuss a couple challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2023/01/thumb-1-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 156 Mariam Aly: Memory, Attention, and Perception</title>
	<link>https://braininspired.co/podcast/156/</link>
	<pubDate>Fri, 23 Dec 2022 00:37:17 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2124</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health.</p>



<ul class="wp-block-list">
<li><a href="https://www.alylab.org/">Aly Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mariam_s_aly">@mariam_s_aly</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.alylab.org/_files/ugd/1d2439_764d2d599b94432090070316796bb6f8.pdf">Attention promotes episodic encoding by stabilizing hippocampal representations</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f834876a78cc441dbcd23831023dcb77.pdf">The medial temporal lobe is critical for spatial relational perception</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f3e97b69b76b4b0fbb19d8845192d7d4.pdf">Cholinergic modulation of hippocampally mediated attention and perception</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_7d24b5618ddc469b883297c8affdbe4f.pdf">Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f6d1f29d60f64e92b8c7a36109de3e54.pdf">How hippocampal memory shapes, and is shaped by, attention</a>.</li>



<li><a href="https://psyarxiv.com/j32bn">Attentional fluctuations and the temporal organization of memory</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:50 - Mariam's background
9:32 - Hippocampus history and current science
12:34 - Hippocampus and perception
13:42 - Relational information
18:30 - How much memory is explicit?
22:32 - How attention affects hippocampus
32:40 - fMRI levels vs. stability
39:04 - How is the hippocampus necessary for attention
57:00 - How much does attention affect memory?
1:02:24 - How memory affects attention
1:06:50 - Attention and memory relation big picture
1:07:42 - Current state of memory and attention
1:12:12 - Modularity
1:17:52 - Practical advice to improve attention/memory
1:21:22 - Mariam's challenges</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Mariam Aly runs the Aly lab at Columbia University, where she studies the int]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health.</p>



<ul class="wp-block-list">
<li><a href="https://www.alylab.org/">Aly Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mariam_s_aly">@mariam_s_aly</a>.</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.alylab.org/_files/ugd/1d2439_764d2d599b94432090070316796bb6f8.pdf">Attention promotes episodic encoding by stabilizing hippocampal representations</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f834876a78cc441dbcd23831023dcb77.pdf">The medial temporal lobe is critical for spatial relational perception</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f3e97b69b76b4b0fbb19d8845192d7d4.pdf">Cholinergic modulation of hippocampally mediated attention and perception</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_7d24b5618ddc469b883297c8affdbe4f.pdf">Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex</a>.</li>



<li><a href="https://www.alylab.org/_files/ugd/1d2439_f6d1f29d60f64e92b8c7a36109de3e54.pdf">How hippocampal memory shapes, and is shaped by, attention</a>.</li>



<li><a href="https://psyarxiv.com/j32bn">Attentional fluctuations and the temporal organization of memory</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:50 - Mariam's background
9:32 - Hippocampus history and current science
12:34 - Hippocampus and perception
13:42 - Relational information
18:30 - How much memory is explicit?
22:32 - How attention affects hippocampus
32:40 - fMRI levels vs. stability
39:04 - How is the hippocampus necessary for attention
57:00 - How much does attention affect memory?
1:02:24 - How memory affects attention
1:06:50 - Attention and memory relation big picture
1:07:42 - Current state of memory and attention
1:12:12 - Modularity
1:17:52 - Practical advice to improve attention/memory
1:21:22 - Mariam's challenges</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2124/156.mp3" length="97028589" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health.




Aly Lab.



Twitter:&nbsp;@mariam_s_aly.



Related papers

Attention promotes episodic encoding by stabilizing hippocampal representations.



The medial temporal lobe is critical for spatial relational perception.



Cholinergic modulation of hippocampally mediated attention and perception.



Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex.



How hippocampal memory shapes, and is shaped by, attention.



Attentional fluctuations and the temporal organization of memory.






0:00 - Intro
3:50 - Mariam's background
9:32 - Hippocampus history and current science
12:34 - Hippocampus and perception
13:42 - Relational information
18:30 - How much memory is explicit?
22:32 - How attention affects hippocampus
32:40 - fMRI levels vs. stability
39:04 - How is the hippocampus necessary for attention
57:00 - How much does attention affect memory?
1:02:24 - How memory affects attention
1:06:50 - Attention and memory relation big picture
1:07:42 - Current state of memory and attention
1:12:12 - Modularity
1:17:52 - Practical advice to improve attention/memory
1:21:22 - Mariam's challenges]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/12/156-Mariam-Aly-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/12/156-Mariam-Aly-website.jpg</url>
		<title>BI 156 Mariam Aly: Memory, Attention, and Perception</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:40:45</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health.




Aly Lab.



Twitter:&nbsp;@mariam_s_aly.



Related papers

Attention promotes episodic encoding by stabilizing hippocampal representations.



The medial temporal lobe is critical for spatial relational perception.



Cholinerg]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/12/156-Mariam-Aly-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 155 Luiz Pessoa: The Entangled Brain</title>
	<link>https://braininspired.co/podcast/155/</link>
	<pubDate>Sat, 10 Dec 2022 06:46:08 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2119</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>











<p>Luiz Pessoa runs his <a href="https://lce.umd.edu/">Laboratory of Cognition and Emotion</a> at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, <a href="https://amzn.to/3VKZPcm">The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together</a>, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X <em>does</em> function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also <a href="https://braininspired.co/podcast/152/">BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</a>). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more.</p>





<ul class="wp-block-list">
<li><a href="https://lce.umd.edu/">Laboratory of Cognition and Emotion</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/PessoaBrain">@PessoaBrain</a>.</li>



<li>Book: <a href="https://amzn.to/3VKZPcm">The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together</a></li>
</ul>



<p>0:00 - Intro
2:47 - The Entangled Brain
16:24 - How to think about complex systems
23:41 - Modularity thinking
28:16 - How to train one's mind to think complex
33:26 - Problem or principle?
44:22 - Complex behaviors
47:06 - Organization vs. structure
51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity
55:15 - Principles of organization: High Distributed Functional Connectivity
1:00:50 - Principles of organization: Networks as Functional Units
1:06:15 - Principles of Organization: Interactions via Cortical-Subcortical Loops
1:08:53 - Open and closed loops
1:16:43 - Principles of organization: Connectivity with the Body
1:21:28 - Consciousness
1:24:53 - Emotions
1:32:49 - Emotions and AI
1:39:47 - Emotion as a concept
1:43:25 - Complexity and functional organization in AI</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience















Luiz Pessoa runs his Laboratory of Cognition and Emotion at the Universit]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>











<p>Luiz Pessoa runs his <a href="https://lce.umd.edu/">Laboratory of Cognition and Emotion</a> at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, <a href="https://amzn.to/3VKZPcm">The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together</a>, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X <em>does</em> function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also <a href="https://braininspired.co/podcast/152/">BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</a>). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more.</p>





<ul class="wp-block-list">
<li><a href="https://lce.umd.edu/">Laboratory of Cognition and Emotion</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/PessoaBrain">@PessoaBrain</a>.</li>



<li>Book: <a href="https://amzn.to/3VKZPcm">The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together</a></li>
</ul>



<p>0:00 - Intro
2:47 - The Entangled Brain
16:24 - How to think about complex systems
23:41 - Modularity thinking
28:16 - How to train one's mind to think complex
33:26 - Problem or principle?
44:22 - Complex behaviors
47:06 - Organization vs. structure
51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity
55:15 - Principles of organization: High Distributed Functional Connectivity
1:00:50 - Principles of organization: Networks as Functional Units
1:06:15 - Principles of Organization: Interactions via Cortical-Subcortical Loops
1:08:53 - Open and closed loops
1:16:43 - Principles of organization: Connectivity with the Body
1:21:28 - Consciousness
1:24:53 - Emotions
1:32:49 - Emotions and AI
1:39:47 - Emotion as a concept
1:43:25 - Complexity and functional organization in AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2119/155.mp3" length="110167230" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience















Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X does function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also BI 152 Michael L. Anderson: After Phrenology: Neural Reuse). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more.






Laboratory of Cognition and Emotion.



Twitter:&nbsp;@PessoaBrain.



Book: The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together




0:00 - Intro
2:47 - The Entangled Brain
16:24 - How to think about complex systems
23:41 - Modularity thinking
28:16 - How to train one's mind to think complex
33:26 - Problem or principle?
44:22 - Complex behaviors
47:06 - Organization vs. structure
51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity
55:15 - Principles of organization: High Distributed Functional Connectivity
1:00:50 - Principles of organization: Networks as Functional Units
1:06:15 - Principles of Organization: Interactions via Cortical-Subcortical Loops
1:08:53 - Open and closed loops
1:16:43 - Principles of organization: Connectivity with the Body
1:21:28 - Consciousness
1:24:53 - Emotions
1:32:49 - Emotions and AI
1:39:47 - Emotion as a concept
1:43:25 - Complexity and functional organization in AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/12/155-Luiz-Pessoa-website.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/12/155-Luiz-Pessoa-website.jpg</url>
		<title>BI 155 Luiz Pessoa: The Entangled Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:54:26</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience















Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X does function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also BI 152 Michael L. Anderson: After]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/12/155-Luiz-Pessoa-website.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 154 Anne Collins: Learning with Working Memory</title>
	<link>https://braininspired.co/podcast/154/</link>
	<pubDate>Tue, 29 Nov 2022 02:45:04 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2114</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Anne Collins runs her&nbsp;<a href="https://ccn.berkeley.edu/">Computational Cognitive Neuroscience Lab</a> at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like MF vs MB-RL, system-1 vs system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.</p>



<ul class="wp-block-list">
<li><a href="https://ccn.berkeley.edu/">Computational Cognitive Neuroscience Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/ccnlab">@ccnlab</a> or <a href="https://twitter.com/Anne_on_Tw">@Anne_On_Tw</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://ccn.berkeley.edu/pdfs/papers/YooCollins2022JoCN_WMRL.pdf">How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective</a>.&nbsp;</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/MBMF_NatureReviews_R2.pdf">Beyond simple dichotomies in reinforcement learning</a>.</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/EFshapesRL2020_R1.pdf">The Role of Executive Function in Shaping Reinforcement Learning</a>.</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/EcksteinWilbrechtCollins_2021.pdf">What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:25 - Dimensionality of learning
11:19 - Modularity of function and computations
16:51 - Is working memory a thing?
19:33 - Model-free model-based dichotomy
30:40 - Working memory and RL
44:43 - How working memory and RL interact
50:50 - Working memory and attention
59:37 - Computations vs. implementations
1:03:25 - Interpreting results
1:08:00 - Working memory and AI</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Anne Collins runs her&nbsp; Computational Cognitive Neuroscience Lab at the Uni]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Anne Collins runs her&nbsp;<a href="https://ccn.berkeley.edu/">Computational Cognitive Neuroscience Lab</a> at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like MF vs MB-RL, system-1 vs system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.</p>



<ul class="wp-block-list">
<li><a href="https://ccn.berkeley.edu/">Computational Cognitive Neuroscience Lab</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/ccnlab">@ccnlab</a> or <a href="https://twitter.com/Anne_on_Tw">@Anne_On_Tw</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://ccn.berkeley.edu/pdfs/papers/YooCollins2022JoCN_WMRL.pdf">How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective</a>.&nbsp;</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/MBMF_NatureReviews_R2.pdf">Beyond simple dichotomies in reinforcement learning</a>.</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/EFshapesRL2020_R1.pdf">The Role of Executive Function in Shaping Reinforcement Learning</a>.</li>



<li><a href="https://ccn.berkeley.edu/pdfs/papers/EcksteinWilbrechtCollins_2021.pdf">What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
5:25 - Dimensionality of learning
11:19 - Modularity of function and computations
16:51 - Is working memory a thing?
19:33 - Model-free model-based dichotomy
30:40 - Working memory and RL
44:43 - How working memory and RL interact
50:50 - Working memory and attention
59:37 - Computations vs. implementations
1:03:25 - Interpreting results
1:08:00 - Working memory and AI</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2114/154.mp3" length="79451920" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like MF vs MB-RL, system-1 vs system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.




Computational Cognitive Neuroscience Lab.



Twitter:&nbsp;@ccnlab or @Anne_On_Tw.



Related papers:

How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective.&nbsp;



Beyond simple dichotomies in reinforcement learning.



The Role of Executive Function in Shaping Reinforcement Learning.



What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience.






0:00 - Intro
5:25 - Dimensionality of learning
11:19 - Modularity of function and computations
16:51 - Is working memory a thing?
19:33 - Model-free model-based dichotomy
30:40 - Working memory and RL
44:43 - How working memory and RL interact
50:50 - Working memory and attention
59:37 - Computations vs. implementations
1:03:25 - Interpreting results
1:08:00 - Working memory and AI]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/11/154-Anne-Collins-3.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/11/154-Anne-Collins-3.jpg</url>
		<title>BI 154 Anne Collins: Learning with Working Memory</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:22:27</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated and how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like MF vs MB-RL, system-1 vs system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.




Computational Cognitive Neuroscience Lab.



Twitter:&nbsp;@ccnlab or @Anne_On_Tw.



]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/11/154-Anne-Collins-3.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 153 Carolyn Dicey Jennings: Attention and the Self</title>
	<link>https://braininspired.co/podcast/153/</link>
	<pubDate>Fri, 18 Nov 2022 15:39:58 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2108</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Carolyn Dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced. In her book <a href="https://amzn.to/3Temj48">The Attending Mind</a>, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and that this is a candidate for mental causation. We unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.</p>





<ul class="wp-block-list">
<li><a href="http://faculty.ucmerced.edu/cjennings3/#">Carolyn's website</a>.</li>



<li>Books:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3Temj48">The Attending Mind</a>.</li>
</ul>
</li>



<li>Aeon article:
<ul class="wp-block-list">
<li><a href="https://aeon.co/essays/what-is-the-self-if-not-that-which-pays-attention">I Attend, Therefore I Am</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="http://faculty.ucmerced.edu/cjennings3/Synthese.pdf">The Subject of Attention</a>.</li>



<li><a href="http://faculty.ucmerced.edu/cjennings3/ConsciousnessMind.pdf">Consciousness and Mind</a>.</li>



<li><a href="https://philpapers.org/archive/JENPRA-2.pdf">Practical Realism about the Self</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
12:15 - Reconceptualizing attention
16:07 - Types of attention
19:02 - Predictive processing and attention
23:19 - Consciousness, identity, and self
30:39 - Attention and the brain
35:47 - Integrated information theory
42:05 - Neural attention
52:08 - Decoupling oscillations from spikes
57:16 - Selves in other organisms
1:00:42 - AI and the self
1:04:43 - Attention, consciousness, conscious perception
1:08:36 - Meaning and attention
1:11:12 - Conscious entrainment
1:19:57 - Is attention a switch or knob?</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carolyn Dicey Jennings is a philosopher and a cognitive scientist at University]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Carolyn Dicey Jennings is a philosopher and a cognitive scientist at University of California, Merced. In her book <a href="https://amzn.to/3Temj48">The Attending Mind</a>, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject, that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow longer-range oscillations in our brains can alter or entrain the activity of more local neural activity, and this is a candidate for mental causation. We unpack that more in our discussion, and how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.</p>





<ul class="wp-block-list">
<li><a href="http://faculty.ucmerced.edu/cjennings3/#">Carolyn's website</a>.</li>



<li>Books:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3Temj48">The Attending Mind</a>.</li>
</ul>
</li>



<li>Aeon article:
<ul class="wp-block-list">
<li><a href="https://aeon.co/essays/what-is-the-self-if-not-that-which-pays-attention">I Attend, Therefore I Am</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="http://faculty.ucmerced.edu/cjennings3/Synthese.pdf">The Subject of Attention</a>.</li>



<li><a href="http://faculty.ucmerced.edu/cjennings3/ConsciousnessMind.pdf">Consciousness and Mind</a>.</li>



<li><a href="https://philpapers.org/archive/JENPRA-2.pdf">Practical Realism about the Self</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
12:15 - Reconceptualizing attention
16:07 - Types of attention
19:02 - Predictive processing and attention
23:19 - Consciousness, identity, and self
30:39 - Attention and the brain
35:47 - Integrated information theory
42:05 - Neural attention
52:08 - Decoupling oscillations from spikes
57:16 - Selves in other organisms
1:00:42 - AI and the self
1:04:43 - Attention, consciousness, conscious perception
1:08:36 - Meaning and attention
1:11:12 - Conscious entrainment
1:19:57 - Is attention a switch or knob?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2108/153.mp3" length="82384918" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carolyn Dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and that this is a candidate for mental causation. We unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.






Carolyn's website.



Books:

The Attending Mind.





Aeon article:

I Attend, Therefore I Am.





Related papers

The Subject of Attention.



Consciousness and Mind.



Practical Realism about the Self.






0:00 - Intro
12:15 - Reconceptualizing attention
16:07 - Types of attention
19:02 - Predictive processing and attention
23:19 - Consciousness, identity, and self
30:39 - Attention and the brain
35:47 - Integrated information theory
42:05 - Neural attention
52:08 - Decoupling oscillations from spikes
57:16 - Selves in other organisms
1:00:42 - AI and the self
1:04:43 - Attention, consciousness, conscious perception
1:08:36 - Meaning and attention
1:11:12 - Conscious entrainment
1:19:57 - Is attention a switch or knob?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/11/art-153-02.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/11/art-153-02.jpg</url>
		<title>BI 153 Carolyn Dicey Jennings: Attention and the Self</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:30</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carolyn Dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and that this is a candidate for mental causation. We unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousne]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/11/art-153-02.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</title>
	<link>https://braininspired.co/podcast/152/</link>
	<pubDate>Tue, 08 Nov 2022 16:04:39 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2103</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, <a href="https://amzn.to/3BYCs8m">After Phrenology: Neural Reuse and the Interactive Brain</a>, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from <a href="https://braininspired.co/podcast/77/">John Krakauer</a> and <a href="https://braininspired.co/podcast/136/">Alex Gomez-Marin</a>, about representations and metaphysics, respectively.</p>





<ul class="wp-block-list">
<li><a href="https://www.rotman.uwo.ca/portfolio-items/anderson-michael-l/">Michael's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mljanderson">@mljanderson</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3BYCs8m">After Phrenology: Neural Reuse and the Interactive Brain</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.academia.edu/en/278902/Neural_reuse_a_fundamental_organizational_principle_of_the_brain">Neural reuse: a fundamental organizational principle of the brain.</a></li>



<li><a href="http://philsci-archive.pitt.edu/20003/1/AndersonChampion2021.pdf">Some dilemmas for an account of neural representation: A reply to Poldrack.</a></li>



<li><a href="http://philsci-archive.pitt.edu/20426/1/Davies-Barton%20et%20al.%20(2022).pdf">Debt-free intelligence: Ecological information in minds and machines</a></li>



<li><a href="https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3756684&amp;blobtype=pdf">Describing functional diversity of brain regions and brain networks</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:02 - After Phrenology
13:18 - Typical neuroscience experiment
16:29 - Neural reuse
18:37 - 4E cognition and representations
22:48 - John Krakauer question
27:38 - Gibsonian perception
36:17 - Autoencoders without representations
49:22 - Pluralism
52:42 - Alex Gomez-Marin question - metaphysics
1:01:26 - Stimulus-response historical neuroscience
1:10:59 - After Phrenology influence
1:19:24 - Origins of neural reuse
1:35:25 - The way forward</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at We]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, <a href="https://amzn.to/3BYCs8m">After Phrenology: Neural Reuse and the Interactive Brain</a>, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from <a href="https://braininspired.co/podcast/77/">John Krakauer</a> and <a href="https://braininspired.co/podcast/136/">Alex Gomez-Marin</a>, about representations and metaphysics, respectively.</p>





<ul class="wp-block-list">
<li><a href="https://www.rotman.uwo.ca/portfolio-items/anderson-michael-l/">Michael's website</a>.</li>



<li>Twitter:&nbsp;<a href="https://twitter.com/mljanderson">@mljanderson</a>.</li>



<li>Book:
<ul class="wp-block-list">
<li><a href="https://amzn.to/3BYCs8m">After Phrenology: Neural Reuse and the Interactive Brain</a>.</li>
</ul>
</li>



<li>Related papers
<ul class="wp-block-list">
<li><a href="https://www.academia.edu/en/278902/Neural_reuse_a_fundamental_organizational_principle_of_the_brain">Neural reuse: a fundamental organizational principle of the brain.</a></li>



<li><a href="http://philsci-archive.pitt.edu/20003/1/AndersonChampion2021.pdf">Some dilemmas for an account of neural representation: A reply to Poldrack.</a></li>



<li><a href="http://philsci-archive.pitt.edu/20426/1/Davies-Barton%20et%20al.%20(2022).pdf">Debt-free intelligence: Ecological information in minds and machines</a></li>



<li><a href="https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3756684&amp;blobtype=pdf">Describing functional diversity of brain regions and brain networks</a>.</li>
</ul>
</li>
</ul>



<p>0:00 - Intro
3:02 - After Phrenology
13:18 - Typical neuroscience experiment
16:29 - Neural reuse
18:37 - 4E cognition and representations
22:48 - John Krakauer question
27:38 - Gibsonian perception
36:17 - Autoencoders without representations
49:22 - Pluralism
52:42 - Alex Gomez-Marin question - metaphysics
1:01:26 - Stimulus-response historical neuroscience
1:10:59 - After Phrenology influence
1:19:24 - Origins of neural reuse
1:35:25 - The way forward</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2103/152.mp3" length="101281344" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively.






Michael's website.



Twitter:&nbsp;@mljanderson.



Book:

After Phrenology: Neural Reuse and the Interactive Brain.





Related papers

Neural reuse: a fundamental organizational principle of the brain.



Some dilemmas for an account of neural representation: A reply to Poldrack.



Debt-free intelligence: Ecological information in minds and machines



Describing functional diversity of brain regions and brain networks.






0:00 - Intro
3:02 - After Phrenology
13:18 - Typical neuroscience experiment
16:29 - Neural reuse
18:37 - 4E cognition and representations
22:48 - John Krakauer question
27:38 - Gibsonian perception
36:17 - Autoencoders without representations
49:22 - Pluralism
52:42 - Alex Gomez-Marin question - metaphysics
1:01:26 - Stimulus-response historical neuroscience
1:10:59 - After Phrenology influence
1:19:24 - Origins of neural reuse
1:35:25 - The way forward]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/11/thumbScreen-2.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/11/thumbScreen-2.jpg</url>
		<title>BI 152 Michael L. Anderson: After Phrenology: Neural Reuse</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:45:11</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively.






Michael's website.



Twitter:&nbsp;@mljanderson.



Book:

After Phrenology: Neural Reuse and]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/11/thumbScreen-2.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 151 Steve Byrnes: Brain-like AGI Safety</title>
	<link>https://braininspired.co/podcast/151/</link>
	<pubDate>Sun, 30 Oct 2022 16:48:42 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2093</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his <a href="https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8">Intro to Brain-Like-AGI Safety</a> blog series, which uses what he has learned about brains to address how we might safely make AGI.</p>







<ul class="wp-block-list"><li><a href="https://sjbyrnes.com/index.html">Steve's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/steve47285">@steve47285</a></li><li><a href="https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8">Intro to Brain-Like-AGI Safety</a>.</li></ul>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of cr]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his <a href="https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8">Intro to Brain-Like-AGI Safety</a> blog series, which uses what he has learned about brains to address how we might safely make AGI.</p>







<ul class="wp-block-list"><li><a href="https://sjbyrnes.com/index.html">Steve's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/steve47285">@steve47285</a></li><li><a href="https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8">Intro to Brain-Like-AGI Safety</a>.</li></ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2093/151.mp3" length="87930401" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.







Steve's website.Twitter:&nbsp;@steve47285Intro to Brain-Like-AGI Safety.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/10/art-151-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/10/art-151-01.jpg</url>
		<title>BI 151 Steve Byrnes: Brain-like AGI Safety</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:31:17</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.











Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.







Steve's website.Twitter:&nbsp;@steve47285Intro to Brain-Like-AGI Safety.]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/10/art-151-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 150 Dan Nicholson: Machines, Organisms, Processes</title>
	<link>https://braininspired.co/podcast/150/</link>
	<pubDate>Sat, 15 Oct 2022 17:48:12 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2031</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.</p>





<ul class="wp-block-list"><li><a href="https://philosophy.gmu.edu/people/dnicho">Dan's website</a>. <a href="https://scholar.google.com/citations?hl=en&amp;user=5gxpRPYAAAAJ&amp;view_op=list_works&amp;sortby=pubdate">Google Scholar</a>.</li><li>Twitter: <a href="https://twitter.com/NicholsonHPBio">@NicholsonHPBio</a></li><li>Book<ul><li><a href="https://amzn.to/3CNSjqu">Everything Flows: Towards a Processual Philosophy of Biology</a>.</li></ul></li><li>Related papers<ul><li><a href="https://philpapers.org/archive/NICITC.pdf">Is the Cell Really a Machine?</a></li><li><a href="https://philarchive.org/archive/NICTMC-2">The Machine Conception of the Organism in Development and Evolution: A Critical Analysis</a>.</li><li><a href="https://philpapers.org/archive/NICOBT-2.pdf">On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology</a>.</li></ul></li><li>Related episode: <a href="https://braininspired.co/podcast/118/">BI 118 Johannes Jäger: Beyond Networks</a>.</li></ul>



<p>0:00 - Intro
2:49 - Philosophy and science
16:37 - Role of history
23:28 - What Is Life? And interaction with James Watson
38:37 - Arguments against the machine conception of organisms
49:08 - Organisms as streams (processes)
57:52 - Process philosophy
1:08:59 - Alfred North Whitehead
1:12:45 - Process and consciousness
1:22:16 - Artificial intelligence and process
1:31:47 - Language and symbols and processes</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Dan Nicholson is a philosopher at George Mason University. He incorporates th]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>










<p>Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.</p>





<ul class="wp-block-list"><li><a href="https://philosophy.gmu.edu/people/dnicho">Dan's website</a>. <a href="https://scholar.google.com/citations?hl=en&amp;user=5gxpRPYAAAAJ&amp;view_op=list_works&amp;sortby=pubdate">Google Scholar</a>.</li><li>Twitter: <a href="https://twitter.com/NicholsonHPBio">@NicholsonHPBio</a></li><li>Book<ul><li><a href="https://amzn.to/3CNSjqu">Everything Flows: Towards a Processual Philosophy of Biology</a>.</li></ul></li><li>Related papers<ul><li><a href="https://philpapers.org/archive/NICITC.pdf">Is the Cell Really a Machine?</a></li><li><a href="https://philarchive.org/archive/NICTMC-2">The Machine Conception of the Organism in Development and Evolution: A Critical Analysis</a>.</li><li><a href="https://philpapers.org/archive/NICOBT-2.pdf">On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology</a>.</li></ul></li><li>Related episode: <a href="https://braininspired.co/podcast/118/">BI 118 Johannes Jäger: Beyond Networks</a>.</li></ul>



<p>0:00 - Intro
2:49 - Philosophy and science
16:37 - Role of history
23:28 - What Is Life? And interaction with James Watson
38:37 - Arguments against the machine conception of organisms
49:08 - Organisms as streams (processes)
57:52 - Process philosophy
1:08:59 - Alfred North Whitehead
1:12:45 - Process and consciousness
1:22:16 - Artificial intelligence and process
1:31:47 - Language and symbols and processes</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2031/150.mp3" length="94839622" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.





Dan's website. Google Scholar.Twitter: @NicholsonHPBioBookEverything Flows: Towards a Processual Philosophy of Biology.Related papersIs the Cell Really a Machine?The Machine Conception of the Organism in Development and Evolution: A Critical Analysis.On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology.Related episode: BI 118 Johannes Jäger: Beyond Networks.



0:00 - Intro
2:49 - Philosophy and science
16:37 - Role of history
23:28 - What Is Life? And interaction with James Watson
38:37 - Arguments against the machine conception of organisms
49:08 - Organisms as streams (processes)
57:52 - Process philosophy
1:08:59 - Alfred North Whitehead
1:12:45 - Process and consciousness
1:22:16 - Artificial intelligence and process
1:31:47 - Language and symbols and processes]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/10/art-150-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/10/art-150-01.jpg</url>
		<title>BI 150 Dan Nicholson: Machines, Organisms, Processes</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:38:29</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.





Dan's website. Google Scholar.Twitter: @NicholsonHPBioBookEverything Flows: Towards a Processual Philosophy of Biology.Related papersIs the Cell Really a Machine?The Machine Conception of the Organism ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/10/art-150-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 149 William B. Miller: Cell Intelligence</title>
	<link>https://braininspired.co/podcast/149/</link>
	<pubDate>Wed, 05 Oct 2022 17:20:46 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2025</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, <a href="https://amzn.to/3d99TLm">Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions</a>. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous number and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.</p>





<ul class="wp-block-list"><li><a href="https://www.ourbioverse.com/">William's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/billmillermd?lang=en">@BillMillerMD</a>.</li><li>Book: <a href="https://amzn.to/3d99TLm">Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions</a>.</li></ul>



<p>0:00 - Intro
3:43 - Bioverse
7:29 - Bill's cell appreciation origins
17:03 - Microbiomes
27:01 - Complexity of microbiomes and the "Era of the cell"
46:00 - Robustness
55:05 - Cell vs. human intelligence
1:10:08 - Artificial intelligence
1:21:01 - Neuro-AI
1:25:53 - Hard problem of consciousness</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











William B. Miller is an ex-physician turned evolutionary biologist. In this e]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, <a href="https://amzn.to/3d99TLm">Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions</a>. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous number and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.</p>





<ul class="wp-block-list"><li><a href="https://www.ourbioverse.com/">William's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/billmillermd?lang=en">@BillMillerMD</a>.</li><li>Book: <a href="https://amzn.to/3d99TLm">Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions</a>.</li></ul>



<p>0:00 - Intro
3:43 - Bioverse
7:29 - Bill's cell appreciation origins
17:03 - Microbiomes
27:01 - Complexity of microbiomes and the "Era of the cell"
46:00 - Robustness
55:05 - Cell vs. human intelligence
1:10:08 - Artificial intelligence
1:21:01 - Neuro-AI
1:25:53 - Hard problem of consciousness</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2025/149.mp3" length="90440241" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous number and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.





William's website.Twitter:&nbsp;@BillMillerMD.Book: Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions.



0:00 - Intro
3:43 - Bioverse
7:29 - Bill's cell appreciation origins
17:03 - Microbiomes
27:01 - Complexity of microbiomes and the "Era of the cell"
46:00 - Robustness
55:05 - Cell vs. human intelligence
1:10:08 - Artificial intelligence
1:21:01 - Neuro-AI
1:25:53 - Hard problem of consciousness]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/10/art-149-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/10/art-149-01.jpg</url>
		<title>BI 149 William B. Miller: Cell Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:33:54</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous number and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in scienc]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/10/art-149-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 148 Gaute Einevoll: Brain Simulations</title>
	<link>https://braininspired.co/podcast/148/</link>
	<pubDate>Sun, 25 Sep 2022 16:13:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2021</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Gaute Einevoll is a professor at the University of Oslo and Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail; and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on <a href="https://braininspired.co/podcast/141/">Carina Curto's "beautiful vs ugly models"</a>, and his reaction to <a href="https://braininspired.co/podcast/147/">Noah Hutton's In Silico documentary</a> about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).</p>



<ul class="wp-block-list"><li><a href="https://www.mn.uio.no/compsci/english/people/supervisors/einevoll.html">Gaute's website</a>.</li><li>Twitter: <a href="https://twitter.com/gauteeinevoll">@GauteEinevoll</a>.</li><li>Related papers:<ul><li><a href="https://www.sciencedirect.com/science/article/pii/S0896627319302909?dgcid=api_sd_search-api-endpoint">The Scientific Case for Brain Simulations</a>.</li><li><a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010353">Brain signal predictions from multi-scale networks using a linearized framework</a>.</li><li><a href="https://www.biorxiv.org/content/10.1101/2022.02.22.481540v1">Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex</a></li></ul></li><li><a href="http://LFPy.github.io">LFPy</a>: a Python module for calculation of extracellular potentials from multicompartment neuron models.</li><li>Gaute's <a href="https://vettogvitenskap.no/senseandscience/">Sense and Science</a> podcast.</li></ul>



<p>0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of human brain project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Gaute Einevoll is a professor at the University of Oslo and Norwegian Univers]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Gaute Einevoll is a professor at the University of Oslo and Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail; and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on <a href="https://braininspired.co/podcast/141/">Carina Curto's "beautiful vs ugly models"</a>, and his reaction to <a href="https://braininspired.co/podcast/147/">Noah Hutton's In Silico documentary</a> about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).</p>



<ul class="wp-block-list"><li><a href="https://www.mn.uio.no/compsci/english/people/supervisors/einevoll.html">Gaute's website</a>.</li><li>Twitter: <a href="https://twitter.com/gauteeinevoll">@GauteEinevoll</a>.</li><li>Related papers:<ul><li><a href="https://www.sciencedirect.com/science/article/pii/S0896627319302909?dgcid=api_sd_search-api-endpoint">The Scientific Case for Brain Simulations</a>.</li><li><a href="https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010353">Brain signal predictions from multi-scale networks using a linearized framework</a>.</li><li><a href="https://www.biorxiv.org/content/10.1101/2022.02.22.481540v1">Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex</a></li></ul></li><li><a href="http://LFPy.github.io">LFPy</a>: a Python module for calculation of extracellular potentials from multicompartment neuron models.</li><li>Gaute's <a href="https://vettogvitenskap.no/senseandscience/">Sense and Science</a> podcast.</li></ul>



<p>0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of human brain project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2021/148.mp3" length="86929883" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Gaute Einevoll is a professor at the University of Oslo and Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail; and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs ugly models", and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).



Gaute's website.Twitter: @GauteEinevoll.Related papers:The Scientific Case for Brain Simulations.Brain signal predictions from multi-scale networks using a linearized framework.Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortexLFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models.Gaute's Sense and Science podcast.



0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of human brain project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/09/art-148-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/09/art-148-01.jpg</url>
		<title>BI 148 Gaute Einevoll: Brain Simulations</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:28:48</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Gaute Einevoll is a professor at the University of Oslo and the Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail; and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs ugly models", and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).



Gaute]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/09/art-148-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 147 Noah Hutton: In Silico</title>
	<link>https://braininspired.co/podcast/147/</link>
	<pubDate>Tue, 13 Sep 2022 15:11:05 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2013</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.</p>





<ul class="wp-block-list"><li><a href="https://insilicofilm.com/">In Silico website</a>.<ul><li><a href="https://vimeo.com/ondemand/insilico2" target="_blank" rel="noreferrer noopener">Rent or buy In Silico</a>.</li></ul></li><li><a href="http://noahhutton.com">Noah's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/noah_hutton">@noah_hutton</a>.</li></ul>



<p>0:00 - Intro
3:36 - Release and premiere
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35:43 - Promises and delivery
41:28 - Computer and brain terms interchange
49:22 - Progress vs. illusion of progress
52:19 - Close to quitting
58:01 - Salesmanship vs bad at estimating timelines
1:02:12 - Brain simulation science
1:11:19 - AGI
1:14:48 - Brain simulation vs. neuro-AI
1:21:03 - Opinion on TED talks
1:25:16 - Hero worship
1:29:03 - Feedback on In Silico</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Noah Hutton writes, directs, and scores documentary and narrative films. On t]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.</p>





<ul class="wp-block-list"><li><a href="https://insilicofilm.com/">In Silico website</a>.<ul><li><a href="https://vimeo.com/ondemand/insilico2" target="_blank" rel="noreferrer noopener">Rent or buy In Silico</a>.</li></ul></li><li><a href="http://noahhutton.com">Noah's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/noah_hutton">@noah_hutton</a>.</li></ul>



<p>0:00 - Intro
3:36 - Release and premiere
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35:43 - Promises and delivery
41:28 - Computer and brain terms interchange
49:22 - Progress vs. illusion of progress
52:19 - Close to quitting
58:01 - Salesmanship vs bad at estimating timelines
1:02:12 - Brain simulation science
1:11:19 - AGI
1:14:48 - Brain simulation vs. neuro-AI
1:21:03 - Opinion on TED talks
1:25:16 - Hero worship
1:29:03 - Feedback on In Silico</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2013/147.mp3" length="94725085" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.





In Silico website.Rent or buy In Silico.Noah's website.Twitter:&nbsp;@noah_hutton.



0:00 - Intro
3:36 - Release and premiere
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35:43 - Promises and delivery
41:28 - Computer and brain terms interchange
49:22 - Progress vs. illusion of progress
52:19 - Close to quitting
58:01 - Salesmanship vs bad at estimating timelines
1:02:12 - Brain simulation science
1:11:19 - AGI
1:14:48 - Brain simulation vs. neuro-AI
1:21:03 - Opinion on TED talks
1:25:16 - Hero worship
1:29:03 - Feedback on In Silico]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/09/art-147-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/09/art-147-01.jpg</url>
		<title>BI 147 Noah Hutton: In Silico</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:37:08</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.





In Silico website.Rent or buy In Silico.Noah's website.Twitter:&nbsp;@noah_hutton.



0:00 - Intro
3:36 - Release and premiere
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/09/art-147-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 146 Lauren Ross: Causal and Non-Causal Explanation</title>
	<link>https://braininspired.co/podcast/146/</link>
	<pubDate>Wed, 07 Sep 2022 14:35:40 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=2008</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which <a href="https://braininspired.co/podcast/145/">Jim and I discussed in episode 145</a>. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.</p>



<ul class="wp-block-list"><li><a href="https://www.lps.uci.edu/~rossl/">Lauren's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/ProfLaurenRoss">@ProfLaurenRoss</a></li><li>Related papers<ul><li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/07/A-call-for-more-clarity-around-causality-in-neuroscience-TINS-2022.pdf">A call for more clarity around causality in neuroscience</a>.</li><li><a href="http://philsci-archive.pitt.edu/18504/1/Constraints_Ross.pdf">The explanatory nature of constraints: Law-based, mathematical, and causal</a>.</li><li><a href="http://philsci-archive.pitt.edu/14432/1/Mech_Path_.pdf">Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters</a>.</li><li><a href="https://www.lps.uci.edu/~rossl/A11_Ross.pdf">Distinguishing topological and causal explanation</a>.</li><li><a href="https://www.lps.uci.edu/~rossl/A9_Ross.pdf">Multiple Realizability from a Causal Perspective</a>.</li><li><a href="http://philsci-archive.pitt.edu/20215/1/Ross_Cascade.pdf">Cascade versus mechanism: The diversity of causal structure in science</a>.</li></ul></li></ul>



<p>0:00 - Intro
2:46 - Lauren's background
10:14 - Jim Woodward legacy
15:37 - Golden era of causality
18:56 - Mechanistic explanation
28:51 - Pathways
31:41 - Cascades
36:25 - Topology
41:17 - Constraint
50:44 - Hierarchy of explanations
53:18 - Structure and function
57:49 - Brain and mind
1:01:28 - Reductionism
1:07:58 - Constraint again
1:14:38 - Multiple realizability</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Lauren Ross is an Associate Professor at the University of California, Irvine]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which <a href="https://braininspired.co/podcast/145/">Jim and I discussed in episode 145</a>. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.</p>



<ul class="wp-block-list"><li><a href="https://www.lps.uci.edu/~rossl/">Lauren's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/ProfLaurenRoss">@ProfLaurenRoss</a></li><li>Related papers<ul><li><a href="https://ekmillerlab.mit.edu/wp-content/uploads/2022/07/A-call-for-more-clarity-around-causality-in-neuroscience-TINS-2022.pdf">A call for more clarity around causality in neuroscience</a>.</li><li><a href="http://philsci-archive.pitt.edu/18504/1/Constraints_Ross.pdf">The explanatory nature of constraints: Law-based, mathematical, and causal</a>.</li><li><a href="http://philsci-archive.pitt.edu/14432/1/Mech_Path_.pdf">Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters</a>.</li><li><a href="https://www.lps.uci.edu/~rossl/A11_Ross.pdf">Distinguishing topological and causal explanation</a>.</li><li><a href="https://www.lps.uci.edu/~rossl/A9_Ross.pdf">Multiple Realizability from a Causal Perspective</a>.</li><li><a href="http://philsci-archive.pitt.edu/20215/1/Ross_Cascade.pdf">Cascade versus mechanism: The diversity of causal structure in science</a>.</li></ul></li></ul>



<p>0:00 - Intro
2:46 - Lauren's background
10:14 - Jim Woodward legacy
15:37 - Golden era of causality
18:56 - Mechanistic explanation
28:51 - Pathways
31:41 - Cascades
36:25 - Topology
41:17 - Constraint
50:44 - Hierarchy of explanations
53:18 - Structure and function
57:49 - Brain and mind
1:01:28 - Reductionism
1:07:58 - Constraint again
1:14:38 - Multiple realizability</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/2008/146.mp3" length="79834106" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.



Lauren's website.Twitter:&nbsp;@ProfLaurenRossRelated papersA call for more clarity around causality in neuroscience.The explanatory nature of constraints: Law-based, mathematical, and causal.Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters.Distinguishing topological and causal explanation.Multiple Realizability from a Causal Perspective.Cascade versus mechanism: The diversity of causal structure in science.



0:00 - Intro
2:46 - Lauren's background
10:14 - Jim Woodward legacy
15:37 - Golden era of causality
18:56 - Mechanistic explanation
28:51 - Pathways
31:41 - Cascades
36:25 - Topology
41:17 - Constraint
50:44 - Hierarchy of explanations
53:18 - Structure and function
57:49 - Brain and mind
1:01:28 - Reductionism
1:07:58 - Constraint again
1:14:38 - Multiple realizability]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/09/art-146-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/09/art-146-01.jpg</url>
		<title>BI 146 Lauren Ross: Causal and Non-Causal Explanation</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:22:51</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.



Lauren's website.Twitter:&nbsp;@ProfLaurenRossRelated papersA call for more clarity around causality in neuroscience.The explanatory nature of constraints: Law-based, mathematical, and causal.Causal Con]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/09/art-146-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 145 James Woodward: Causation with a Human Face</title>
	<link>https://braininspired.co/podcast/145/</link>
	<pubDate>Sun, 28 Aug 2022 21:03:37 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1995</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, <a href="https://amzn.to/3pMAwZv">Causation with a Human Face: Normative Theory and Descriptive Psychology</a>. In the book, Jim advocates that how we <em>should</em> think about causality - the normative - needs to be studied together with how we actually <em>do</em> think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.</p>





<ul class="wp-block-list"><li><a href="https://www.jameswoodward.org/">Jim's website</a>.</li><li><a href="https://amzn.to/3QxPMEZ">Making Things Happen: A Theory of Causal Explanation</a>.</li><li><a href="https://amzn.to/3pMAwZv">Causation with a Human Face: Normative Theory and Descriptive Psychology</a>.</li></ul>



<p>0:00 - Intro
4:14 - Causation with a Human Face &amp; Functionalist approach
6:16 - Interventionist causality; Epistemology and metaphysics
9:35 - Normative and descriptive
14:02 - Rationalist approach
20:24 - Normative vs. descriptive
28:00 - Varying notions of causation
33:18 - Invariance
41:05 - Causality in complex systems
47:09 - Downward causation
51:14 - Natural laws
56:38 - Proportionality
1:01:12 - Intuitions
1:10:59 - Normative and descriptive relation
1:17:33 - Causality across disciplines
1:21:26 - What would help our understanding of causation</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











James Woodward is a recently retired Professor from the Department of History]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, <a href="https://amzn.to/3pMAwZv">Causation with a Human Face: Normative Theory and Descriptive Psychology</a>. In the book, Jim advocates that how we <em>should</em> think about causality - the normative - needs to be studied together with how we actually <em>do</em> think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.</p>





<ul class="wp-block-list"><li><a href="https://www.jameswoodward.org/">Jim's website</a>.</li><li><a href="https://amzn.to/3QxPMEZ">Making Things Happen: A Theory of Causal Explanation</a>.</li><li><a href="https://amzn.to/3pMAwZv">Causation with a Human Face: Normative Theory and Descriptive Psychology</a>.</li></ul>



<p>0:00 - Intro
4:14 - Causation with a Human Face &amp; Functionalist approach
6:16 - Interventionist causality; Epistemology and metaphysics
9:35 - Normative and descriptive
14:02 - Rationalist approach
20:24 - Normative vs. descriptive
28:00 - Varying notions of causation
33:18 - Invariance
41:05 - Causality in complex systems
47:09 - Downward causation
51:14 - Natural laws
56:38 - Proportionality
1:01:12 - Intuitions
1:10:59 - Normative and descriptive relation
1:17:33 - Causality across disciplines
1:21:26 - What would help our understanding of causation</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1995/145.mp3" length="82739467" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.





Jim's website.Making Things Happen: A Theory of Causal Explanation.Causation with a Human Face: Normative Theory and Descriptive Psychology.



0:00 - Intro
4:14 - Causation with a Human Face &amp; Functionalist approach
6:16 - Interventionist causality; Epistemology and metaphysics
9:35 - Normative and descriptive
14:02 - Rationalist approach
20:24 - Normative vs. descriptive
28:00 - Varying notions of causation
33:18 - Invariance
41:05 - Causality in complex systems
47:09 - Downward causation
51:14 - Natural laws
56:38 - Proportionality
1:01:12 - Intuitions
1:10:59 - Normative and descriptive relation
1:17:33 - Causality across disciplines
1:21:26 - What would help our understanding of causation]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/08/art-145-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/08/art-145-01.jpg</url>
		<title>BI 145 James Woodward: Causation with a Human Face</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:52</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/08/art-145-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models</title>
	<link>https://braininspired.co/podcast/144/</link>
	<pubDate>Wed, 17 Aug 2022 16:25:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1984</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my short video series about what's missing in AI and Neuroscience.</a></p>





<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Large language models, often now called "foundation models", are the models du jour in AI, based on the <a href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)">transformer architecture</a>. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.</p>





<p>Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.</p>





<p>Emily M. Bender is a computational linguist at University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.</p>



<ul class="wp-block-list"><li><a href="http://evlab.mit.edu/">EvLab</a>.</li><li><a href="http://faculty.washington.edu/ebender/">Emily's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/ev_fedorenko">@ev_fedorenko</a>; <a href="https://twitter.com/emilymbender">@emilymbender</a>.</li><li>Related papers<ul><li><a href="http://evlab.mit.edu/assets/papers/Fedorenko_%26_Varley_2016_ANYAS.pdf">Language and thought are not the same thing: Evidence from neuroimaging and neurological patients</a>. (Fedorenko)</li><li><a href="http://evlab.mit.edu/assets/papers/Schrimpf_et_al_2021_PNAS.pdf">The neural architecture of language: Integrative modeling converges on predictive processing.</a> (Fedorenko)</li><li><a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</a> (Bender)</li><li><a href="https://aclanthology.org/2020.acl-main.463/">Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data</a> (Bender)</li></ul></li></ul>



<p>0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my short video series about what's missing in AI and Neuroscience.





Support the show to get full episodes, full archive, and join the Discord community.









Large language models, often now called foundation models, are the model de jou]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my short video series about what's missing in AI and Neuroscience.</a></p>





<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Large language models, often now called "foundation models", are the model du jour in AI, based on the <a href="https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)">transformer architecture</a>. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.</p>





<p>Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.</p>





<p>Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.</p>



<ul class="wp-block-list"><li><a href="http://evlab.mit.edu/">EvLab</a>.</li><li><a href="http://faculty.washington.edu/ebender/">Emily's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/ev_fedorenko">@ev_fedorenko</a>; <a href="https://twitter.com/emilymbender">@emilymbender</a>.</li><li>Related papers<ul><li><a href="http://evlab.mit.edu/assets/papers/Fedorenko_%26_Varley_2016_ANYAS.pdf">Language and thought are not the same thing: Evidence from neuroimaging and neurological patients</a>. (Fedorenko)</li><li><a href="http://evlab.mit.edu/assets/papers/Schrimpf_et_al_2021_PNAS.pdf">The neural architecture of language: Integrative modeling converges on predictive processing.</a> (Fedorenko)</li><li><a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</a> (Bender)</li><li><a href="https://aclanthology.org/2020.acl-main.463/">Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data</a> (Bender)</li></ul></li></ul>



<p>0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1984/144.mp3" length="69116190" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my short video series about what's missing in AI and Neuroscience.





Support the show to get full episodes, full archive, and join the Discord community.









Large language models, often now called "foundation models", are the model du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.





Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.





Emily M. Bender is a computational linguist at the University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.



EvLab. Emily's website. Twitter: @ev_fedorenko; @emilymbender.
Related papers:
- Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
- The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
- Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender)



0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/08/art-144-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/08/art-144-01.jpg</url>
		<title>BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:11:41</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my short video series about what's missing in AI and Neuroscience.





Support the show to get full episodes, full archive, and join the Discord community.









Large language models, often now called "foundation models", are the model du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.





Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for effic]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/08/art-144-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 143 Rodolphe Sepulchre: Mixed Feedback Control</title>
	<link>https://braininspired.co/podcast/143/</link>
	<pubDate>Fri, 05 Aug 2022 23:15:10 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1981</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.</p>



<ul class="wp-block-list"><li><a href="https://sites.google.com/site/rsepulchre/home">Rodolphe's website</a>.</li><li>Related papers<ul><li><a href="https://arxiv.org/abs/2112.03565">Spiking Control Systems</a>.</li><li><a href="https://www.annualreviews.org/doi/full/10.1146/annurev-control-053018-023708">Control Across Scales by Positive and Negative Feedback</a>.</li><li><a href="https://www.google.com/url?q=https%3A%2F%2Fwww.dropbox.com%2Fs%2Fljouwdgrs06sx8e%2FNeuromorphic_Control_Designing_Multiscale_Mixed-Feedback_Systems.pdf%3Fdl%3D0&amp;sa=D&amp;sntz=1&amp;usg=AOvVaw0WM3AsKkZ9LeYw9xArCJy-">Neuromorphic control</a>. (<a href="http://www.google.com/url?q=http%3A%2F%2Farxiv.org%2Fabs%2F2011.04441&amp;sa=D&amp;sntz=1&amp;usg=AOvVaw2dfjmxFAZvv6glFuTGm3IG">arXiv version</a>)</li></ul></li><li>Related episodes:<ul><li><a href="https://braininspired.co/podcast/130/">BI 130 Eve Marder: Modulation of Networks</a></li><li><a href="https://braininspired.co/podcast/119/">BI 119 Henry Yin: The Crisis in Neuroscience</a></li></ul></li></ul>



<p>0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Rodolphe Sepulchre is a control engineer and theorist at Cambridge University]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.</p>



<ul class="wp-block-list"><li><a href="https://sites.google.com/site/rsepulchre/home">Rodolphe's website</a>.</li><li>Related papers<ul><li><a href="https://arxiv.org/abs/2112.03565">Spiking Control Systems</a>.</li><li><a href="https://www.annualreviews.org/doi/full/10.1146/annurev-control-053018-023708">Control Across Scales by Positive and Negative Feedback</a>.</li><li><a href="https://www.google.com/url?q=https%3A%2F%2Fwww.dropbox.com%2Fs%2Fljouwdgrs06sx8e%2FNeuromorphic_Control_Designing_Multiscale_Mixed-Feedback_Systems.pdf%3Fdl%3D0&amp;sa=D&amp;sntz=1&amp;usg=AOvVaw0WM3AsKkZ9LeYw9xArCJy-">Neuromorphic control</a>. (<a href="http://www.google.com/url?q=http%3A%2F%2Farxiv.org%2Fabs%2F2011.04441&amp;sa=D&amp;sntz=1&amp;usg=AOvVaw2dfjmxFAZvv6glFuTGm3IG">arXiv version</a>)</li></ul></li><li>Related episodes:<ul><li><a href="https://braininspired.co/podcast/130/">BI 130 Eve Marder: Modulation of Networks</a></li><li><a href="https://braininspired.co/podcast/119/">BI 119 Henry Yin: The Crisis in Neuroscience</a></li></ul></li></ul>



<p>0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1981/143.mp3" length="81786561" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.



Rodolphe's website.
Related papers:
- Spiking Control Systems.
- Control Across Scales by Positive and Negative Feedback.
- Neuromorphic control. (arXiv version)
Related episodes:
- BI 130 Eve Marder: Modulation of Networks
- BI 119 Henry Yin: The Crisis in Neuroscience



0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/08/art-143-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/08/art-143-01.jpg</url>
		<title>BI 143 Rodolphe Sepulchre: Mixed Feedback Control</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:24:53</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.



Rodolphe's website.Related papersSpiking Control Systems.Control Across Scales by Positive and Negative Feedback.Neuromorphic control. (arXiv version)Related episodes:BI 130 Eve M]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/08/art-143-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 142 Cameron Buckner: The New DoGMA</title>
	<link>https://braininspired.co/podcast/142/</link>
	<pubDate>Tue, 26 Jul 2022 17:54:31 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1977</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Cameron Buckner is a philosopher and cognitive scientist at the University of Houston. He is writing a book about the age-old philosophical debate over how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, we will arrive at cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.</p>



<ul class="wp-block-list"><li><a href="http://cameronbuckner.net/professional/index.htm">Cameron's Website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/cameronjbuckner">@cameronjbuckner</a>.</li><li>Related papers<ul><li><a href="http://cameronbuckner.net/professional/deeplearning.pdf">Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks</a>.</li><li><a href="http://cameronbuckner.net/professional/forwardlooking.pdf">A Forward-Looking Theory of Content</a>.</li></ul></li><li>Other sources Cameron mentions:<ul><li><a href="https://arxiv.org/abs/1801.05667">Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus)</a>.</li><li><a href="http://causality.cs.ucla.edu/blog/index.php/2020/07/26/radical-empiricism-and-machine-learning-research/">Radical Empiricism and Machine Learning Research (Judea Pearl)</a>.</li><li><a href="https://link.springer.com/article/10.1007/s11229-021-03028-4">Fodor’s guide to the Humean mind (Tamás Demeter)</a>.</li></ul></li></ul>



<p>0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Cameron Buckner is a philosopher and cognitive scientist at The University of]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>










<p>Cameron Buckner is a philosopher and cognitive scientist at the University of Houston. He is writing a book about the age-old philosophical debate over how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, we will arrive at cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.</p>



<ul class="wp-block-list"><li><a href="http://cameronbuckner.net/professional/index.htm">Cameron's Website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/cameronjbuckner">@cameronjbuckner</a>.</li><li>Related papers<ul><li><a href="http://cameronbuckner.net/professional/deeplearning.pdf">Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks</a>.</li><li><a href="http://cameronbuckner.net/professional/forwardlooking.pdf">A Forward-Looking Theory of Content</a>.</li></ul></li><li>Other sources Cameron mentions:<ul><li><a href="https://arxiv.org/abs/1801.05667">Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus)</a>.</li><li><a href="http://causality.cs.ucla.edu/blog/index.php/2020/07/26/radical-empiricism-and-machine-learning-research/">Radical Empiricism and Machine Learning Research (Judea Pearl)</a>.</li><li><a href="https://link.springer.com/article/10.1007/s11229-021-03028-4">Fodor’s guide to the Humean mind (Tamás Demeter)</a>.</li></ul></li></ul>



<p>0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1977/142.mp3" length="99436595" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Cameron Buckner is a philosopher and cognitive scientist at the University of Houston. He is writing a book about the age-old philosophical debate over how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, we will arrive at cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.



Cameron's Website. Twitter: @cameronjbuckner.
Related papers:
- Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.
- A Forward-Looking Theory of Content.
Other sources Cameron mentions:
- Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus).
- Radical Empiricism and Machine Learning Research (Judea Pearl).
- Fodor’s guide to the Humean mind (Tamás Demeter).



0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/07/art-142-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/07/art-142-01.jpg</url>
		<title>BI 142 Cameron Buckner: The New DoGMA</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:16</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together in a way they can cooperate in a general and flexible manner, it will result in cognitive architectures we would c]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/07/art-142-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 141 Carina Curto: From Structure to Dynamics</title>
	<link>https://braininspired.co/podcast/141/</link>
	<pubDate>Tue, 12 Jul 2022 19:42:41 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1968</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics and string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometric structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of a model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.</p>



<ul class="wp-block-list"><li><a href="https://www.personal.psu.edu/cpc16/">Carina's website</a>.</li><li><a href="https://sites.psu.edu/mathneurolab/">The Mathematical Neuroscience Lab</a>.</li><li>Related papers<ul><li><a href="https://www.personal.psu.edu/cpc16/Curto-whitepaper-2013.pdf">A major obstacle impeding progress in brain science is the lack of beautiful models.</a></li><li><a href="https://arxiv.org/abs/1605.01905">What can topology tell us about the neural code?</a></li><li><a href="https://arxiv.org/abs/1804.01487">Predicting neural network dynamics via graphical analysis</a></li></ul></li></ul>



<p>0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carina Curto is a professor in the Department of Mathematics at The Pennsylvani]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.</p>



<ul class="wp-block-list"><li><a href="https://www.personal.psu.edu/cpc16/">Carina's website</a>.</li><li><a href="https://sites.psu.edu/mathneurolab/">The Mathematical Neuroscience Lab</a>.</li><li>Related papers<ul><li><a href="https://www.personal.psu.edu/cpc16/Curto-whitepaper-2013.pdf">A major obstacle impeding progress in brain science is the lack of beautiful models.</a></li><li><a href="https://arxiv.org/abs/1605.01905">What can topology tell us about the neural code?</a></li><li><a href="https://arxiv.org/abs/1804.01487">Predicting neural network dynamics via graphical analysis</a></li></ul></li></ul>



<p>0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1968/141.mp3" length="88309733" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.



Carina's website.
The Mathematical Neuroscience Lab.
Related papers:
A major obstacle impeding progress in brain science is the lack of beautiful models.
What can topology tell us about the neural code?
Predicting neural network dynamics via graphical analysis



0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/07/art-141-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/07/art-141-01.jpg</url>
		<title>BI 141 Carina Curto: From Structure to Dynamics</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:31:40</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of mode]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/07/art-141-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 140 Jeff Schall: Decisions and Eye Movements</title>
	<link>https://braininspired.co/podcast/140/</link>
	<pubDate>Thu, 30 Jun 2022 22:37:51 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1886</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the <a href="http://www.psy.vanderbilt.edu/faculty/schall/">Schall Lab</a>. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. <a href="https://pubmed.ncbi.nlm.nih.gov/6395480/">Linking Propositions</a> by Davida Teller is a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. <a href="https://bio.research.ucsc.edu/~barrylab/classes/bio183w/PlattSci1964_Strong_Inference.pdf">Strong Inference</a> by John Platt is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with the huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers, which we also discuss. One was written 20 years ago (<a href="http://www.psy.vanderbilt.edu/courses/hon182/Schall_Ann_Rev_Psych_2004.pdf">On Building a Bridge Between Brain and Behavior</a>), and the other 2-ish years ago (<a href="http://www.psy.vanderbilt.edu/faculty/schall/pdfs/Schall_TINS_2019.pdf">Accumulators, Neurons, and Response Time</a>).</p>



<ul class="wp-block-list"><li><a href="http://www.psy.vanderbilt.edu/faculty/schall/">Schall Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/LabSchall">@LabSchall</a>.</li><li>Related papers<ul><li><a href="https://pubmed.ncbi.nlm.nih.gov/6395480/">Linking Propositions</a>.</li><li><a href="https://bio.research.ucsc.edu/~barrylab/classes/bio183w/PlattSci1964_Strong_Inference.pdf">Strong Inference</a>.</li><li><a href="http://www.psy.vanderbilt.edu/courses/hon182/Schall_Ann_Rev_Psych_2004.pdf">On Building a Bridge Between Brain and Behavior</a>.</li><li><a href="http://www.psy.vanderbilt.edu/faculty/schall/pdfs/Schall_TINS_2019.pdf">Accumulators, Neurons, and Response Time</a>.</li></ul></li></ul>



<p>0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Jeff Schall is the director of the Center for Visual Neurophysiology at York Un]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the <a href="http://www.psy.vanderbilt.edu/faculty/schall/">Schall Lab</a>. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. <a href="https://pubmed.ncbi.nlm.nih.gov/6395480/">Linking Propositions</a> by Davida Teller is a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. <a href="https://bio.research.ucsc.edu/~barrylab/classes/bio183w/PlattSci1964_Strong_Inference.pdf">Strong Inference</a> by John Platt is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with the huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers, which we also discuss. One was written 20 years ago (<a href="http://www.psy.vanderbilt.edu/courses/hon182/Schall_Ann_Rev_Psych_2004.pdf">On Building a Bridge Between Brain and Behavior</a>), and the other 2-ish years ago (<a href="http://www.psy.vanderbilt.edu/faculty/schall/pdfs/Schall_TINS_2019.pdf">Accumulators, Neurons, and Response Time</a>).</p>



<ul class="wp-block-list"><li><a href="http://www.psy.vanderbilt.edu/faculty/schall/">Schall Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/LabSchall">@LabSchall</a>.</li><li>Related papers<ul><li><a href="https://pubmed.ncbi.nlm.nih.gov/6395480/">Linking Propositions</a>.</li><li><a href="https://bio.research.ucsc.edu/~barrylab/classes/bio183w/PlattSci1964_Strong_Inference.pdf">Strong Inference</a>.</li><li><a href="http://www.psy.vanderbilt.edu/courses/hon182/Schall_Ann_Rev_Psych_2004.pdf">On Building a Bridge Between Brain and Behavior</a>.</li><li><a href="http://www.psy.vanderbilt.edu/faculty/schall/pdfs/Schall_TINS_2019.pdf">Accumulators, Neurons, and Response Time</a>.</li></ul></li></ul>



<p>0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1886/140.mp3" length="77447615" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions by Davida Teller is a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference by John Platt is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with the huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers, which we also discuss. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other 2-ish years ago (Accumulators, Neurons, and Response Time).



Schall Lab.
Twitter: @LabSchall.
Related papers:
Linking Propositions.
Strong Inference.
On Building a Bridge Between Brain and Behavior.
Accumulators, Neurons, and Response Time.



0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/06/art-140-01-1.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/06/art-140-01-1.jpg</url>
		<title>BI 140 Jeff Schall: Decisions and Eye Movements</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:20:22</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers on the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions by Davida Teller is a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference by John Platt is the scientific method on steroids - a way to make our scientific practice most productive and]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/06/art-140-01-1.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 139 Marc Howard: Compressed Time and Memory</title>
	<link>https://braininspired.co/podcast/139/</link>
	<pubDate>Mon, 20 Jun 2022 16:49:31 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1858</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p>Marc Howard runs his <a href="https://sites.bu.edu/tcn/">Theoretical Cognitive Neuroscience Lab</a> at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a <a href="https://en.wikipedia.org/wiki/Laplace_transform">Laplace transform</a> and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive domains. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning nets to improve their ability to handle information at different time scales.</p>



<ul class="wp-block-list">
<li><a href="https://sites.bu.edu/tcn/">Theoretical Cognitive Neuroscience Lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/marcwhoward777">@marcwhoward777</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://sites.bu.edu/tcn/files/2017/06/TiCSVision.pdf">Memory as perception of the past: Compressed time in mind and brain.</a></li>



<li><a href="http://arxiv.org/abs/2201.01796">Formal models of memory based on temporally-varying representations.</a></li>



<li><a href="http://arxiv.org/abs/2003.11668">Cognitive computation using neural representations of time and space in the Laplace domain.</a></li>



<li><a href="https://www.youtube.com/watch?v=DRXcK0iTPUc&amp;t=731s">Time as a continuous dimension in natural and artificial networks</a>.</li>



<li><a href="https://arxiv.org/abs/2104.04646">DeepSITH: Efficient learning via decomposition of what and when across time scales.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.







Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University,]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p>Marc Howard runs his <a href="https://sites.bu.edu/tcn/">Theoretical Cognitive Neuroscience Lab</a> at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a <a href="https://en.wikipedia.org/wiki/Laplace_transform">Laplace transform</a> and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive domains. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning nets to improve their ability to handle information at different time scales.</p>



<ul class="wp-block-list">
<li><a href="https://sites.bu.edu/tcn/">Theoretical Cognitive Neuroscience Lab</a>.</li>



<li>Twitter: <a href="https://twitter.com/marcwhoward777">@marcwhoward777</a>.</li>



<li>Related papers:
<ul class="wp-block-list">
<li><a href="https://sites.bu.edu/tcn/files/2017/06/TiCSVision.pdf">Memory as perception of the past: Compressed time in mind and brain.</a></li>



<li><a href="http://arxiv.org/abs/2201.01796">Formal models of memory based on temporally-varying representations.</a></li>



<li><a href="http://arxiv.org/abs/2003.11668">Cognitive computation using neural representations of time and space in the Laplace domain.</a></li>



<li><a href="https://www.youtube.com/watch?v=DRXcK0iTPUc&amp;t=731s">Time as a continuous dimension in natural and artificial networks</a>.</li>



<li><a href="https://arxiv.org/abs/2104.04646">DeepSITH: Efficient learning via decomposition of what and when across time scales.</a></li>
</ul>
</li>
</ul>



<p>0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1858/139.mp3" length="77276449" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.







Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive domains. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning nets to improve their ability to handle information at different time scales.




Theoretical Cognitive Neuroscience Lab.



Twitter: @marcwhoward777.



Related papers:

Memory as perception of the past: Compressed time in mind and brain.



Formal models of memory based on temporally-varying representations.



Cognitive computation using neural representations of time and space in the Laplace domain.



Time as a continuous dimension in natural and artificial networks.



DeepSITH: Efficient learning via decomposition of what and when across time scales.






0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/06/art-139-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/06/art-139-01.jpg</url>
		<title>BI 139 Marc Howard: Compressed Time and Memory</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:20:11</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.







Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use across a wide variety of cognitive domains. We also discuss some of the ways Marc is incorporating this mathematical operation into deep learning nets to improve their ability to handle information at different time sc]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/06/art-139-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 138 Matthew Larkum: The Dendrite Hypothesis</title>
	<link>https://braininspired.co/podcast/138/</link>
	<pubDate>Mon, 06 Jun 2022 14:58:39 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1853</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>






<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes - silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.</p>



<ul class="wp-block-list"><li><a href="https://www.projekte.hu-berlin.de/en/larkum">Larkum Lab</a>.</li><li>Twitter: <a href="https://twitter.com/mattlark">@mattlark</a>.</li><li>Related papers<ul><li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30175-3?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661320301753%3Fshowall%3Dtrue">Cellular Mechanisms of Conscious Processing</a>.</li><li><a href="https://pubmed.ncbi.nlm.nih.gov/33335033/">Perirhinal input to neocortical layer 1 controls learning</a>. (bioRxiv link: <a href="https://www.biorxiv.org/content/10.1101/713883v1">https://www.biorxiv.org/content/10.1101/713883v1</a>)</li><li><a href="https://doi.org/10.1016/j.neuroscience.2022.03.008">Are dendrites conceptually useful?</a></li><li><a href="https://doi.org/10.1126/science.abk1859">Memories off the top of your head</a>.</li><li><a href="https://www.researchgate.net/publication/359861041_Do_action_potentials_cause_consciousness">Do Action Potentials Cause Consciousness?</a></li></ul></li><li><a href="https://braininspired.co/podcast/9/">Blake Richards' episode</a> discussing back-propagation in the brain (based on Matthew's experiments)</li></ul>



<p>0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Matthew Larkum runs his lab at Humboldt University of Berlin, where his group s]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers, and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.</p>



<ul class="wp-block-list"><li><a href="https://www.projekte.hu-berlin.de/en/larkum">Larkum Lab</a>.</li><li>Twitter: <a href="https://twitter.com/mattlark">@mattlark</a>.</li><li>Related papers<ul><li><a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30175-3?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661320301753%3Fshowall%3Dtrue">Cellular Mechanisms of Conscious Processing</a>.</li><li><a href="https://pubmed.ncbi.nlm.nih.gov/33335033/">Perirhinal input to neocortical layer 1 controls learning</a>. (bioRxiv link: <a href="https://www.biorxiv.org/content/10.1101/713883v1">https://www.biorxiv.org/content/10.1101/713883v1</a>)</li><li><a href="https://doi.org/10.1016/j.neuroscience.2022.03.008">Are dendrites conceptually useful?</a></li><li><a href="https://doi.org/10.1126/science.abk1859">Memories off the top of your head</a>.</li><li><a href="https://www.researchgate.net/publication/359861041_Do_action_potentials_cause_consciousness">Do Action Potentials Cause Consciousness?</a></li></ul></li><li><a href="https://braininspired.co/podcast/9/">Blake Richards' episode</a> discussing back-propagation in the brain (based on Matthew's experiments)</li></ul>



<p>0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1853/138.mp3" length="107538486" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers, and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.



Larkum Lab.Twitter: @mattlark.Related papersCellular Mechanisms of Conscious Processing.Perirhinal input to neocortical layer 1 controls learning. (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1)Are dendrites conceptually useful?Memories off the top of your head.Do Action Potentials Cause Consciousness?Blake Richards' episode discussing back-propagation in the brain (based on Matthew's experiments)



0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/06/art-138-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/06/art-138-01.jpg</url>
		<title>BI 138 Matthew Larkum: The Dendrite Hypothesis</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:51:42</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.









Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers, and thus from different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the differen]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/06/art-138-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 137 Brian Butterworth: Can Fish Count?</title>
	<link>https://braininspired.co/podcast/137/</link>
	<pubDate>Fri, 27 May 2022 17:48:32 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1837</guid>
	<description><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, <a href="https://amzn.to/384caoB">Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds</a>, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.</p>





<ul class="wp-block-list"><li>Brian's website: <a href="https://www.mathematicalbrain.com/">The Mathematical Brain</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/b_butterworth">@b_butterworth</a></li><li>The book:<ul><li><a href="https://amzn.to/384caoB">Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds</a></li></ul></li></ul>



<p>0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?</p>]]></description>
	<itunes:subtitle><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at Unive]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>







<p>Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, <a href="https://amzn.to/384caoB">Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds</a>, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.</p>





<ul class="wp-block-list"><li>Brian's website: <a href="https://www.mathematicalbrain.com/">The Mathematical Brain</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/b_butterworth">@b_butterworth</a></li><li>The book:<ul><li><a href="https://amzn.to/384caoB">Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds</a></li></ul></li></ul>



<p>0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1837/137.mp3" length="75013076" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.





Brian's website: The Mathematical BrainTwitter:&nbsp;@b_butterworthThe book:Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds



0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/05/art-137-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/05/art-137-01.jpg</url>
		<title>BI 137 Brian Butterworth: Can Fish Count?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:17:49</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Check out my free video series about what's missing in AI and Neuroscience







Support the show to get full episodes, full archive, and join the Discord community.











Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.





Brian's website: The Mathematical BrainTwitter:&nbsp;@b_butterworthThe book:Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds



0:00 - In]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/05/art-137-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology</title>
	<link>https://braininspired.co/podcast/136/</link>
	<pubDate>Tue, 17 May 2022 14:54:42 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1826</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.</p>



<ul class="wp-block-list"><li><a href="http://michel.bitbol.pagesperso-orange.fr/">Michel's website</a></li><li>Alex's Lab: <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/behaviOrganisms">@behaviOrganisms</a> (Alex)</li><li>Related papers<ul><li><a href="https://rosa.uniroma1.it/rosa04/organisms/article/view/16437/15864">The Blind Spot of Neuroscience</a>&nbsp;&nbsp;</li><li><a href="https://www.sciencedirect.com/science/article/pii/S0896627319307901">The Life of Behavior</a></li><li><a href="https://behavioroforganismsdotorg.files.wordpress.com/2019/11/gomezmarin2019bbs.pdf">A Clash of Umwelts</a>&nbsp;</li></ul></li><li>Related&nbsp;events:<ul><li><a href="https://paricenter.com/the-future-scientist-a-conversation-series/">The Future Scientist</a>&nbsp;(a conversation series)</li></ul></li></ul>



<p>0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Michel Bitbol is Director of Research at CNRS (Centre National de la Recherch]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>







<p>Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.</p>



<ul class="wp-block-list"><li><a href="http://michel.bitbol.pagesperso-orange.fr/">Michel's website</a></li><li>Alex's Lab: <a href="https://behavior-of-organisms.org/">The Behavior of Organisms Laboratory</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/behaviOrganisms">@behaviOrganisms</a> (Alex)</li><li>Related papers<ul><li><a href="https://rosa.uniroma1.it/rosa04/organisms/article/view/16437/15864">The Blind Spot of Neuroscience</a>&nbsp;&nbsp;</li><li><a href="https://www.sciencedirect.com/science/article/pii/S0896627319307901">The Life of Behavior</a></li><li><a href="https://behavioroforganismsdotorg.files.wordpress.com/2019/11/gomezmarin2019bbs.pdf">A Clash of Umwelts</a>&nbsp;</li></ul></li><li>Related&nbsp;events:<ul><li><a href="https://paricenter.com/the-future-scientist-a-conversation-series/">The Future Scientist</a>&nbsp;(a conversation series)</li></ul></li></ul>



<p>0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1826/136.mp3" length="90731605" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence: our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.



Michel's websiteAlex's Lab: The Behavior of Organisms Laboratory.Twitter:&nbsp;@behaviOrganisms (Alex)Related papersThe Blind Spot of Neuroscience&nbsp;&nbsp;The Life of BehaviorA Clash of Umwelts&nbsp;Related&nbsp;events:The Future Scientist&nbsp;(a conversation series)



0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/05/art-136-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/05/art-136-01.jpg</url>
		<title>BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:34:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience











Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently tryi]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/05/art-136-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 135 Elena Galea: The Stars of the Brain</title>
	<link>https://braininspired.co/podcast/135/</link>
	<pubDate>Fri, 06 May 2022 22:12:25 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1778</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to examine how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and (Elena's favorite current hypothesis) their integrative role in negative feedback control.</p>



<ul class="wp-block-list"><li><a href="https://www.icrea.cat/Web/ScientificStaff/elena-galea-248">Elena's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/elenagalea1">@elenagalea1</a></li><li>Related papers<ul><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832773/">A roadmap to integrate astrocytes into Systems Neuroscience</a>.</li><li>Elena recommended this paper: <a href="https://pubmed.ncbi.nlm.nih.gov/34139160/">Biological feedback control—Respect the loops</a>.</li></ul></li></ul>





<p>0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Brains are often conceived as consisting of neurons and everything else. As Ele]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>



<a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener"></a>





<p>Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to examine how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and (Elena's favorite current hypothesis) their integrative role in negative feedback control.</p>



<ul class="wp-block-list"><li><a href="https://www.icrea.cat/Web/ScientificStaff/elena-galea-248">Elena's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/elenagalea1">@elenagalea1</a></li><li>Related papers<ul><li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832773/">A roadmap to integrate astrocytes into Systems Neuroscience</a>.</li><li>Elena recommended this paper: <a href="https://pubmed.ncbi.nlm.nih.gov/34139160/">Biological feedback control—Respect the loops</a>.</li></ul></li></ul>





<p>0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1778/135.mp3" length="74617991" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to study how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control.



Elena's website. Twitter: @elenagalea1. Related papers: A roadmap to integrate astrocytes into Systems Neuroscience. Elena recommended this paper: Biological feedback control—Respect the loops.





0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/05/art-135-top-l-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/05/art-135-top-l-01.jpg</url>
		<title>BI 135 Elena Galea: The Stars of the Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:17:25</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology has only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to study how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/05/art-135-top-l-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 134 Mandyam Srinivasan: Bee Flight and Cognition</title>
	<link>https://braininspired.co/podcast/134/</link>
	<pubDate>Wed, 27 Apr 2022 16:11:44 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1757</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.</p>



<ul class="wp-block-list"><li><a href="https://qbi.uq.edu.au/profile/613/srini-srinivasan">Srini's Website</a>.</li><li>Related papers<ul><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006291X20317940">Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics</a>.</li></ul></li></ul>





<p>0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.</p>



<ul class="wp-block-list"><li><a href="https://qbi.uq.edu.au/profile/613/srini-srinivasan">Srini's Website</a>.</li><li>Related papers<ul><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S0006291X20317940">Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics</a>.</li></ul></li></ul>





<p>0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1757/134.mp3" length="83140699" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.



Srini's Website. Related papers: Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.





0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/04/art-134-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/04/art-134-01.jpg</url>
		<title>BI 134 Mandyam Srinivasan: Bee Flight and Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:26:17</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to gracefully land. These abilities are largely governed by control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.



Srini's Website. Related papers: Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robo]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/04/art-134-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep</title>
	<link>https://braininspired.co/podcast/133/</link>
	<pubDate>Fri, 15 Apr 2022 17:06:09 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1750</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is <a href="https://pallerlab.psych.northwestern.edu/dream.html">freely available via his lab</a>. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.</p>



<ul class="wp-block-list"><li><a href="https://pallerlab.psych.northwestern.edu/">Ken's Cognitive Neuroscience Laboratory</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/kap101">@kap101</a>.</li><li><a href="https://pallerlab.psych.northwestern.edu/dream.html">The Lucid Dreaming App</a>.</li><li>Related papers<ul><li><a href="https://par.nsf.gov/servlets/purl/10275880">Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better</a>.</li><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S1074742721000642">Does memory reactivation during sleep support generalization at the cost of memory specifics?</a></li><li><a href="https://www.cell.com/current-biology/pdf/S0960-9822(21)00059-2.pdf">Real-time dialogue between experimenters and dreamers during REM sleep.</a></li></ul></li></ul>





<p>0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Ken discusses the recent work in his lab that allows communication with subject]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p class="has-text-align-center"><a href="https://braininspired.co/open/" target="_blank" rel="noreferrer noopener">Check out my free video series about what's missing in AI and Neuroscience</a></p>








<p>Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is <a href="https://pallerlab.psych.northwestern.edu/dream.html">freely available via his lab</a>. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.</p>



<ul class="wp-block-list"><li><a href="https://pallerlab.psych.northwestern.edu/">Ken's Cognitive Neuroscience Laboratory</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/kap101">@kap101</a>.</li><li><a href="https://pallerlab.psych.northwestern.edu/dream.html">The Lucid Dreaming App</a>.</li><li>Related papers<ul><li><a href="https://par.nsf.gov/servlets/purl/10275880">Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better</a>.</li><li><a href="https://www.sciencedirect.com/science/article/abs/pii/S1074742721000642">Does memory reactivation during sleep support generalization at the cost of memory specifics?</a></li><li><a href="https://www.cell.com/current-biology/pdf/S0960-9822(21)00059-2.pdf">Real-time dialogue between experimenters and dreamers during REM sleep.</a></li></ul></li></ul>





<p>0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1750/133.mp3" length="85965436" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.



Ken's Cognitive Neuroscience Laboratory. Twitter: @kap101. The Lucid Dreaming App. Related papers: Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better. Does memory reactivation during sleep support generalization at the cost of memory specifics? Real-time dialogue between experimenters and dreamers during REM sleep.





0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/04/art-133-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/04/art-133-01.jpg</url>
		<title>BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:14</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Check out my free video series about what's missing in AI and Neuroscience









Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.



Ken's Cognitive Neuroscience Laboratory. Twitter: @kap101. The Lucid Dreaming App. Related papers: Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better. Does memory reactivation during sleep support generalization at the cost of me]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/04/art-133-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 132 Ila Fiete: A Grid Scaffold for Memory</title>
	<link>https://braininspired.co/podcast/132/</link>
	<pubDate>Sun, 03 Apr 2022 15:31:18 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1729</guid>
	<description><![CDATA[<h2 class="has-text-align-center wp-block-heading">Announcement:</h2>



<p>I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. <a href="https://braininspired.co/bi-workshop-2/">Learn more here.</a></p>





<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.</p>



<ul class="wp-block-list"><li><a href="https://fietelab.mit.edu/">The Fiete Lab</a>.</li><li>Related papers<ul><li><a href="https://www.biorxiv.org/content/10.1101/2021.11.20.469406v1.article-info">A structured scaffold underlies activity in the hippocampus.</a></li><li><a href="https://arxiv.org/abs/2112.03978">Attractor and integrator networks in the brain.</a></li></ul></li></ul>





<p>0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes</p>]]></description>
	<itunes:subtitle><![CDATA[Announcement:



I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.





Support the show to get full episodes, full archive, and join the Discord community.









Ila discusses her theoretical n]]></itunes:subtitle>
	<content:encoded><![CDATA[<h2 class="has-text-align-center wp-block-heading">Announcement:</h2>



<p>I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. <a href="https://braininspired.co/bi-workshop-2/">Learn more here.</a></p>





<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.</p>



<ul class="wp-block-list"><li><a href="https://fietelab.mit.edu/">The Fiete Lab</a>.</li><li>Related papers<ul><li><a href="https://www.biorxiv.org/content/10.1101/2021.11.20.469406v1.article-info">A structured scaffold underlies activity in the hippocampus.</a></li><li><a href="https://arxiv.org/abs/2112.03978">Attractor and integrator networks in the brain.</a></li></ul></li></ul>





<p>0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1729/132.mp3" length="74535459" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Announcement:



I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.





Support the show to get full episodes, full archive, and join the Discord community.









Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.



The Fiete Lab. Related papers: A structured scaffold underlies activity in the hippocampus. Attractor and integrator networks in the brain.





0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/04/art-132-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/04/art-132-01.jpg</url>
		<title>BI 132 Ila Fiete: A Grid Scaffold for Memory</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:17:20</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Announcement:



I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.





Support the show to get full episodes, full archive, and join the Discord community.









Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex - signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and c]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/04/art-132-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs</title>
	<link>https://braininspired.co/podcast/131/</link>
	<pubDate>Sat, 26 Mar 2022 05:11:39 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1698</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".</p>



<ul class="wp-block-list"><li><a href="https://blogs.ncl.ac.uk/srikanthramaswamy/">Neural Circuits Laboratory</a>.</li><li>Twitter:&nbsp;Sri: <a href="https://twitter.com/srikipedia">@srikipedia</a>; Jie: <a href="https://twitter.com/neuro_Mei">@neuro_Mei</a>.</li><li>Related papers<ul><li><a href="https://www.cell.com/trends/neurosciences/pdf/S0166-2236(21)00256-3.pdf">Informing deep neural networks by multiscale principles of neuromodulatory systems</a>.</li></ul></li></ul>





<p>0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present questi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".</p>



<ul class="wp-block-list"><li><a href="https://blogs.ncl.ac.uk/srikanthramaswamy/">Neural Circuits Laboratory</a>.</li><li>Twitter:&nbsp;Sri: <a href="https://twitter.com/srikipedia">@srikipedia</a>; Jie: <a href="https://twitter.com/neuro_Mei">@neuro_Mei</a>.</li><li>Related papers<ul><li><a href="https://www.cell.com/trends/neurosciences/pdf/S0166-2236(21)00256-3.pdf">Informing deep neural networks by multiscale principles of neuromodulatory systems</a>.</li></ul></li></ul>





<p>0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1698/131.mp3" length="83690092" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".



Neural Circuits Laboratory. Twitter: Sri: @srikipedia; Jie: @neuro_Mei. Related papers: Informing deep neural networks by multiscale principles of neuromodulatory systems.





0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/03/art-131-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/03/art-131-01.jpg</url>
		<title>BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:26:52</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".



Neural Circuits Laboratory. Twitter: Sri: @srikipedia; Jie: @neuro_Mei. Related papers: Informing deep neural networks by multiscale principles of neuromodulatory systems.





0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/03/art-131-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 130 Eve Marder: Modulation of Networks</title>
	<link>https://braininspired.co/podcast/130/</link>
	<pubDate>Sun, 13 Mar 2022 14:54:05 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1695</guid>
	<description><![CDATA[<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>





<p>Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.</p>



<ul class="wp-block-list"><li><a href="http://blogs.brandeis.edu/marderlab/">The Marder Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/MarderLab">@MarderLab</a>.</li><li>Related to our conversation:<ul><li><a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002147">Understanding Brains: Details, Intuition, and Big Data</a>.</li><li><a href="http://sites.iiserpune.ac.in/~raghav/pdfs/animalbehavior/ReadingList/getting_1989.pdf">Emerging principles governing the operation of neural networks</a> (Eve mentions this regarding "building blocks" of neural networks).</li></ul></li></ul>





<p>0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of theory
34:56 - Technology vs. understanding
38:25 - Higher cognitive function
44:35 - Adaptability, resilience, evolution
50:23 - Climate change
56:11 - Deep learning
57:12 - Dynamical systems</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.





Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neu]]></itunes:subtitle>
	<content:encoded><![CDATA[<a href="https://www.patreon.com/braininspired"></a>



<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>





<p>Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.</p>



<ul class="wp-block-list"><li><a href="http://blogs.brandeis.edu/marderlab/">The Marder Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/MarderLab">@MarderLab</a>.</li><li>Related to our conversation:<ul><li><a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002147">Understanding Brains: Details, Intuition, and Big Data</a>.</li><li><a href="http://sites.iiserpune.ac.in/~raghav/pdfs/animalbehavior/ReadingList/getting_1989.pdf">Emerging principles governing the operation of neural networks</a> (Eve mentions this regarding "building blocks" of neural networks).</li></ul></li></ul>





<p>0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of theory
34:56 - Technology vs. understanding
38:25 - Higher cognitive function
44:35 - Adaptability, resilience, evolution
50:23 - Climate change
56:11 - Deep learning
57:12 - Dynamical systems</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1695/130.mp3" length="58792390" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.





Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.



The Marder Lab. Twitter: @MarderLab. Related to our conversation: Understanding Brains: Details, Intuition, and Big Data; Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).





0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of theory
34:56 - Technology vs. understanding
38:25 - Higher cognitive function
44:35 - Adaptability, resilience, evolution
50:23 - Climate change
56:11 - Deep learning
57:12 - Dynamical systems]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/03/art-130-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/03/art-130-01.jpg</url>
		<title>BI 130 Eve Marder: Modulation of Networks</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:00:56</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.





Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a remarkable variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.



The Marder Lab. Twitter: @MarderLab. Related to our conversation: Understanding Brains: Details, Intuition, and Big Data; Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).





0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of the]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/03/art-130-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 129 Patryk Laurent: Learning from the Real World</title>
	<link>https://braininspired.co/podcast/129/</link>
	<pubDate>Wed, 02 Mar 2022 16:02:10 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1681</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.</p>



<ul class="wp-block-list"><li><a href="https://pakl.net/">Patryk's homepage</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/paklnet">@paklnet</a>.</li><li>Related papers<ul><li><a href="https://arxiv.org/abs/1607.06854">Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network</a>.</li></ul></li></ul>





<p>0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 - Advice to past self</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward i]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.</p>



<ul class="wp-block-list"><li><a href="https://pakl.net/">Patryk's homepage</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/paklnet">@paklnet</a>.</li><li>Related papers<ul><li><a href="https://arxiv.org/abs/1607.06854">Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network</a>.</li></ul></li></ul>





<p>0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 - Advice to past self</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1681/129.mp3" length="78083383" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.



Patryk's homepage. Twitter: @paklnet. Related papers: Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.





0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 - Advice to past self]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/03/art-129-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/03/art-129-01.jpg</url>
		<title>BI 129 Patryk Laurent: Learning from the Real World</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:21:01</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.



Patryk's homepage. Twitter: @paklnet. Related papers: Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.





0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/03/art-129-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 128 Hakwan Lau: In Consciousness We Trust</title>
	<link>https://braininspired.co/podcast/128/</link>
	<pubDate>Sun, 20 Feb 2022 16:44:01 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1670</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Hakwan and I discuss many of the topics in his new book, <a href="https://www.amazon.com/gp/product/B09RBB5LBW/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B09RBB5LBW&amp;linkId=04019273dfa08820df5aafdbdef49d20">In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience</a>. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, <a href="https://braininspired.co/podcast/99/">BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness</a>.</p>





<ul class="wp-block-list"><li>Hakwan's lab: <a href="https://sites.google.com/view/hakwan-lau-lab">Consciousness and Metacognition Lab</a>.</li><li>Twitter: <a href="https://twitter.com/hakwanlau">@hakwanlau</a>.</li><li>Book:<ul><li><a href="https://www.amazon.com/gp/product/B09RBB5LBW/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B09RBB5LBW&amp;linkId=04019273dfa08820df5aafdbdef49d20">In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience</a>.</li></ul></li></ul>





<p>0:00 - Intro
4:37 - In Consciousness We Trust
12:19 - Too many consciousness theories?
19:26 - Philosophy and neuroscience of consciousness
29:00 - Local vs. global theories
31:20 - Perceptual reality monitoring and GANs
42:43 - Functions of consciousness
47:17 - Mental quality space
56:44 - Cognitive maps
1:06:28 - Performance capacity confounds
1:12:28 - Blindsight
1:19:11 - Philosophy vs. empirical work</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his pe]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Hakwan and I discuss many of the topics in his new book, <a href="https://www.amazon.com/gp/product/B09RBB5LBW/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B09RBB5LBW&amp;linkId=04019273dfa08820df5aafdbdef49d20">In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience</a>. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, <a href="https://braininspired.co/podcast/99/">BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness</a>.</p>





<ul class="wp-block-list"><li>Hakwan's lab: <a href="https://sites.google.com/view/hakwan-lau-lab">Consciousness and Metacognition Lab</a>.</li><li>Twitter: <a href="https://twitter.com/hakwanlau">@hakwanlau</a>.</li><li>Book:<ul><li><a href="https://www.amazon.com/gp/product/B09RBB5LBW/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B09RBB5LBW&amp;linkId=04019273dfa08820df5aafdbdef49d20">In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience</a>.</li></ul></li></ul>





<p>0:00 - Intro
4:37 - In Consciousness We Trust
12:19 - Too many consciousness theories?
19:26 - Philosophy and neuroscience of consciousness
29:00 - Local vs. global theories
31:20 - Perceptual reality monitoring and GANs
42:43 - Functions of consciousness
47:17 - Mental quality space
56:44 - Cognitive maps
1:06:28 - Performance capacity confounds
1:12:28 - Blindsight
1:19:11 - Philosophy vs. empirical work</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1670/128.mp3" length="82548911" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.





Hakwan's lab: Consciousness and Metacognition Lab. Twitter: @hakwanlau. Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.





0:00 - Intro
4:37 - In Consciousness We Trust
12:19 - Too many consciousness theories?
19:26 - Philosophy and neuroscience of consciousness
29:00 - Local vs. global theories
31:20 - Perceptual reality monitoring and GANs
42:43 - Functions of consciousness
47:17 - Mental quality space
56:44 - Cognitive maps
1:06:28 - Performance capacity confounds
1:12:28 - Blindsight
1:19:11 - Philosophy vs. empirical work]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/02/art-128-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/02/art-128-01.jpg</url>
		<title>BI 128 Hakwan Lau: In Consciousness We Trust</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:40</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.





Hakwan's lab: Consciousness and Metacognition Lab. Twitter: @hakwanlau. Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.





0:00 - Intro]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/02/art-128-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</title>
	<link>https://braininspired.co/podcast/127/</link>
	<pubDate>Thu, 10 Feb 2022 16:26:38 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1636</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanism as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: <a href="https://braininspired.co/podcast/126/" target="_blank" rel="noreferrer noopener">BI 126 Randy Gallistel: Where Is the Engram?</a></p>



<ul class="wp-block-list"><li><a href="https://ryan-lab.org/tomas-ryan/">Ryan Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/tjryan_77?lang=en">@TJRyan_77</a>.</li><li>Related papers<ul><li><a href="https://www.sciencedirect.com/science/article/pii/S0959438821000088">Engram cell connectivity: an evolving substrate for information storage</a>.</li><li><a href="https://pubmed.ncbi.nlm.nih.gov/35027710/">Forgetting as a form of adaptive engram cell plasticity.</a></li><li>Memory and Instinct as a Continuum of Information Storage in <a href="https://www.amazon.com/gp/product/0262043254/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0262043254&amp;linkId=7104c2dad5abdbb3909d4d8e4f8dd8e0">The Cognitive Neurosciences.</a></li><li><a href="https://monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_Bandwagon.pdf">The Bandwagon</a> by Claude Shannon.</li></ul></li></ul>





<p>0:00 - Intro
4:05 - Response to Randy Gallistel
10:45 - Computation in the brain
14:52 - Instinct and memory
19:37 - Dynamics of memory
21:55 - Wiring vs. connection strength plasticity
24:16 - Changing one's mind
33:09 - Optogenetics and memory experiments
47:24 - Forgetting as learning
1:06:35 - Folk psychological terms
1:08:49 - Memory becoming instinct
1:21:49 - Instinct across the lifetime
1:25:52 - Boundaries of memories
1:28:52 - Subjective experience of memory
1:31:58 - Interdisciplinary research
1:37:32 - Communicating science</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instin]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanism as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: <a href="https://braininspired.co/podcast/126/" target="_blank" rel="noreferrer noopener">BI 126 Randy Gallistel: Where Is the Engram?</a></p>



<ul class="wp-block-list"><li><a href="https://ryan-lab.org/tomas-ryan/">Ryan Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/tjryan_77?lang=en">@TJRyan_77</a>.</li><li>Related papers<ul><li><a href="https://www.sciencedirect.com/science/article/pii/S0959438821000088">Engram cell connectivity: an evolving substrate for information storage</a>.</li><li><a href="https://pubmed.ncbi.nlm.nih.gov/35027710/">Forgetting as a form of adaptive engram cell plasticity.</a></li><li>Memory and Instinct as a Continuum of Information Storage in <a href="https://www.amazon.com/gp/product/0262043254/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0262043254&amp;linkId=7104c2dad5abdbb3909d4d8e4f8dd8e0">The Cognitive Neurosciences.</a></li><li><a href="https://monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_Bandwagon.pdf">The Bandwagon</a> by Claude Shannon.</li></ul></li></ul>





<p>0:00 - Intro
4:05 - Response to Randy Gallistel
10:45 - Computation in the brain
14:52 - Instinct and memory
19:37 - Dynamics of memory
21:55 - Wiring vs. connection strength plasticity
24:16 - Changing one's mind
33:09 - Optogenetics and memory experiments
47:24 - Forgetting as learning
1:06:35 - Folk psychological terms
1:08:49 - Memory becoming instinct
1:21:49 - Instinct across the lifetime
1:25:52 - Boundaries of memories
1:28:52 - Subjective experience of memory
1:31:58 - Interdisciplinary research
1:37:32 - Communicating science</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1636/127.mp3" length="98850130" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram?



Ryan Lab.Twitter:&nbsp;@TJRyan_77.Related papersEngram cell connectivity: an evolving substrate for information storage.Forgetting as a form of adaptive engram cell plasticity.Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences.The Bandwagon by Claude Shannon.





0:00 - Intro
4:05 - Response to Randy Gallistel
10:45 - Computation in the brain
14:52 - Instinct and memory
19:37 - Dynamics of memory
21:55 - Wiring vs. connection strength plasticity
24:16 - Changing one's mind
33:09 - Optogenetics and memory experiments
47:24 - Forgetting as learning
1:06:35 - Folk psychological terms
1:08:49 - Memory becoming instinct
1:21:49 - Instinct across the lifetime
1:25:52 - Boundaries of memories
1:28:52 - Subjective experience of memory
1:31:58 - Interdisciplinary research
1:37:32 - Communicating science]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/02/art-127-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/02/art-127-01.jpg</url>
		<title>BI 127 Tomás Ryan: Memory, Instinct, and Forgetting</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:42:39</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engr]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/02/art-127-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 126 Randy Gallistel: Where Is the Engram?</title>
	<link>https://braininspired.co/podcast/126/</link>
	<pubDate>Mon, 31 Jan 2022 16:57:11 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1631</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA and RNA). He lays out his case in detail in his book with Adam King, <a href="https://www.amazon.com/gp/product/1405122889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1405122889&amp;linkId=1362bd57c9102eee598fc51ed3aa6126">Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.</a> We also talk about some research and theoretical work since then that supports his views.</p>





<ul class="wp-block-list"><li>Randy's <a href="https://psych.rutgers.edu/faculty-profiles-a-contacts/96-charles-randy-gallistel">Rutgers website</a>.</li><li>Book:<ul><li><a href="https://www.amazon.com/gp/product/1405122889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1405122889&amp;linkId=1362bd57c9102eee598fc51ed3aa6126">Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.</a></li></ul></li><li>Related papers:<ul><li>The theoretical RNA paper Randy mentions: <a href="https://arxiv.org/abs/2008.08814v3">An RNA-based theory of natural universal computation</a>.</li><li>Evidence for intracellular engram in cerebellum: <a href="https://www.pnas.org/content/111/41/14930">Memory trace and timing mechanism localized to cerebellar Purkinje cells</a>.</li></ul></li><li><a href="https://youtu.be/JCDaURmKPm4?t=4211">The exchange between Randy and John Lisman</a>.</li><li>The blog post Randy mentions about universal function approximation:<ul><li><a href="https://www.lifeiscomputation.com/the-truth-about-the-not-so-universal-approximation-theorem/">The Truth About the [Not So] Universal Approximation Theorem</a></li></ul></li></ul>





<p>0:00 - Intro
6:50 - Cognitive science vs. computational neuroscience
13:23 - Brain as computing device
15:45 - Noam Chomsky's influence
17:58 - Memory must be stored within cells
30:58 - Theoretical support for the idea
34:15 - Cerebellum evidence supporting the idea
40:56 - What is the write mechanism?
51:11 - Thoughts on deep learning
1:00:02 - Multiple memory mechanisms?
1:10:56 - The role of plasticity
1:12:06 - Trying to convince molecular biologists</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA and RNA). He lays out his case in detail in his book with Adam King, <a href="https://www.amazon.com/gp/product/1405122889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1405122889&amp;linkId=1362bd57c9102eee598fc51ed3aa6126">Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.</a> We also talk about some research and theoretical work since then that supports his views.</p>





<ul class="wp-block-list"><li>Randy's <a href="https://psych.rutgers.edu/faculty-profiles-a-contacts/96-charles-randy-gallistel">Rutgers website</a>.</li><li>Book:<ul><li><a href="https://www.amazon.com/gp/product/1405122889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1405122889&amp;linkId=1362bd57c9102eee598fc51ed3aa6126">Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.</a></li></ul></li><li>Related papers:<ul><li>The theoretical RNA paper Randy mentions: <a href="https://arxiv.org/abs/2008.08814v3">An RNA-based theory of natural universal computation</a>.</li><li>Evidence for intracellular engram in cerebellum: <a href="https://www.pnas.org/content/111/41/14930">Memory trace and timing mechanism localized to cerebellar Purkinje cells</a>.</li></ul></li><li><a href="https://youtu.be/JCDaURmKPm4?t=4211">The exchange between Randy and John Lisman</a>.</li><li>The blog post Randy mentions about universal function approximation:<ul><li><a href="https://www.lifeiscomputation.com/the-truth-about-the-not-so-universal-approximation-theorem/">The Truth About the [Not So] Universal Approximation Theorem</a></li></ul></li></ul>





<p>0:00 - Intro
6:50 - Cognitive science vs. computational neuroscience
13:23 - Brain as computing device
15:45 - Noam Chomsky's influence
17:58 - Memory must be stored within cells
30:58 - Theoretical support for the idea
34:15 - Cerebellum evidence supporting the idea
40:56 - What is the write mechanism?
51:11 - Thoughts on deep learning
1:00:02 - Multiple memory mechanisms?
1:10:56 - The role of plasticity
1:12:06 - Trying to convince molecular biologists</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1631/126.mp3" length="77062878" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA and RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.





Randy's Rutgers website.Book:Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.Related papers:The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.Evidence for intracellular engram in cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.The exchange between Randy and John Lisman.The blog post Randy mentions about universal function approximation:The Truth About the [Not So] Universal Approximation Theorem





0:00 - Intro
6:50 - Cognitive science vs. computational neuroscience
13:23 - Brain as computing device
15:45 - Noam Chomsky's influence
17:58 - Memory must be stored within cells
30:58 - Theoretical support for the idea
34:15 - Cerebellum evidence supporting the idea
40:56 - What is the write mechanism?
51:11 - Thoughts on deep learning
1:00:02 - Multiple memory mechanisms?
1:10:56 - The role of plasticity
1:12:06 - Trying to convince molecular biologists]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/01/art-126-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/01/art-126-01.jpg</url>
		<title>BI 126 Randy Gallistel: Where Is the Engram?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:19:57</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA and RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.





Randy's Rutgers website.Book:Memory and the Computational Brain: Why ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/01/art-126-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys</title>
	<link>https://braininspired.co/podcast/125/</link>
	<pubDate>Wed, 19 Jan 2022 23:19:34 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1625</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p>Doris, Tony, and Blake are the organizers for this year's NAISys conference, <a href="https://meetings.cshl.edu/meetings.aspx?meet=NAISYS&amp;year=22">From Neuroscience to Artificially Intelligent Systems (NAISys)</a>, at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.</p>





<ul class="wp-block-list"><li><a href="https://meetings.cshl.edu/meetings.aspx?meet=NAISYS&amp;year=22">From Neuroscience to Artificially Intelligent Systems (NAISys)</a>.</li><li>Doris:<ul><li><a href="https://twitter.com/doristsao">@doristsao</a>.</li><li><a href="https://www.tsaolab.caltech.edu/">Tsao Lab</a>.</li><li><a href="https://www.nature.com/articles/s41467-021-26751-5.pdf">Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons</a>.</li></ul></li><li>Tony:<ul><li><a href="https://twitter.com/TonyZador">@TonyZador</a>.</li><li><a href="http://zadorlab.labsites.cshl.edu/">Zador Lab</a>.</li><li><a href="https://www.biorxiv.org/content/10.1101/582643v1">A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains</a>.</li></ul></li><li>Blake:<ul><li><a href="https://twitter.com/tyrell_turing">@tyrell_turing</a>.</li><li><a href="https://sites.google.com/mila.quebec/linc-lab/home">The Learning in Neural Circuits Lab</a>.</li><li><a href="https://proceedings.neurips.cc/paper/2021/file/d384dec9f5f7a64a36b5c8f03b8a6d92-Paper.pdf">The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.</a></li></ul></li></ul>





<p>0:00 - Intro
4:16 - Tony Zador
5:38 - Doris Tsao
10:44 - Blake Richards
15:46 - Deductive, inductive, abductive inference
16:32 - NAISys
33:09 - Evolution, development, learning
38:23 - Learning: plasticity vs. dynamical structures
54:13 - Different kinds of understanding
1:03:05 - Do we understand evolution well enough?
1:04:03 - Neuro-AI fad?
1:06:26 - Are your problems bigger or smaller now?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor.
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>






<p>Doris, Tony, and Blake are the organizers for this year's NAISys conference, <a href="https://meetings.cshl.edu/meetings.aspx?meet=NAISYS&amp;year=22">From Neuroscience to Artificially Intelligent Systems (NAISys)</a>, at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.</p>





<ul class="wp-block-list"><li><a href="https://meetings.cshl.edu/meetings.aspx?meet=NAISYS&amp;year=22">From Neuroscience to Artificially Intelligent Systems (NAISys)</a>.</li><li>Doris:<ul><li><a href="https://twitter.com/doristsao">@doristsao</a>.</li><li><a href="https://www.tsaolab.caltech.edu/">Tsao Lab</a>.</li><li><a href="https://www.nature.com/articles/s41467-021-26751-5.pdf">Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons</a>.</li></ul></li><li>Tony:<ul><li><a href="https://twitter.com/TonyZador">@TonyZador</a>.</li><li><a href="http://zadorlab.labsites.cshl.edu/">Zador Lab</a>.</li><li><a href="https://www.biorxiv.org/content/10.1101/582643v1">A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains</a>.</li></ul></li><li>Blake:<ul><li><a href="https://twitter.com/tyrell_turing">@tyrell_turing</a>.</li><li><a href="https://sites.google.com/mila.quebec/linc-lab/home">The Learning in Neural Circuits Lab</a>.</li><li><a href="https://proceedings.neurips.cc/paper/2021/file/d384dec9f5f7a64a36b5c8f03b8a6d92-Paper.pdf">The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.</a></li></ul></li></ul>





<p>0:00 - Intro
4:16 - Tony Zador
5:38 - Doris Tsao
10:44 - Blake Richards
15:46 - Deductive, inductive, abductive inference
16:32 - NAISys
33:09 - Evolution, development, learning
38:23 - Learning: plasticity vs. dynamical structures
54:13 - Different kinds of understanding
1:03:05 - Do we understand evolution well enough?
1:04:03 - Neuro-AI fad?
1:06:26 - Are your problems bigger or smaller now?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1625/125.mp3" length="68551118" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.





From Neuroscience to Artificially Intelligent Systems (NAISys).Doris:@doristsao.Tsao Lab.Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.Tony:@TonyZador.Zador Lab.A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.Blake:@tyrell_turing.The Learning in Neural Circuits Lab.The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.





0:00 - Intro
4:16 - Tony Zador
5:38 - Doris Tsao
10:44 - Blake Richards
15:46 - Deductive, inductive, abductive inference
16:32 - NAISys
33:09 - Evolution, development, learning
38:23 - Learning: plasticity vs. dynamical structures
54:13 - Different kinds of understanding
1:03:05 - Do we understand evolution well enough?
1:04:03 - Neuro-AI fad?
1:06:26 - Are your problems bigger or smaller now?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/01/art-125-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/01/art-125-01.jpg</url>
		<title>BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:11:05</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.





From Neuroscience to Artificially Intelligent Systems (NAISys).Doris:@doristsao.Tsao Lab.Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.Tony:@TonyZador.Zador Lab.A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.Blake:@tyrell_turing.The Learning in Neural Circuits Lab.The functional specialization of visual cortex emerges from trai]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/01/art-125-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 124 Peter Robin Hiesinger: The Self-Assembling Brain</title>
	<link>https://braininspired.co/podcast/124/</link>
	<pubDate>Wed, 05 Jan 2022 16:35:41 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1620</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Robin and I discuss many of the ideas in his book <a href="https://www.amazon.com/gp/product/0691181225/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691181225&amp;linkId=7d9b2d30de97530b49d1c6e179db0c3f">The Self-Assembling Brain: How Neural Networks Grow Smarter</a>. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.</p>





<ul class="wp-block-list"><li><a href="http://www.flygen.org/">Hiesinger Neurogenetics Laboratory</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/HiesingerLab">@HiesingerLab.</a></li><li>Book: <a href="https://www.amazon.com/gp/product/0691181225/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691181225&amp;linkId=7d9b2d30de97530b49d1c6e179db0c3f">The Self-Assembling Brain: How Neural Networks Grow Smarter</a></li></ul>





<p>0:00 - Intro
3:01 - The Self-Assembling Brain
21:14 - Including growth in networks
27:52 - Information unfolding and algorithmic growth
31:27 - Cellular automata
40:43 - Learning as a continuum of growth
45:01 - Robustness, autonomous agents
49:11 - Metabolism vs. connectivity
58:00 - Feedback at all levels
1:05:32 - Generality vs. specificity
1:10:36 - Whole brain emulation
1:20:38 - Changing view of intelligence
1:26:34 - Popular and wrong vs. unknown and right</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>








<p>Robin and I discuss many of the ideas in his book <a href="https://www.amazon.com/gp/product/0691181225/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691181225&amp;linkId=7d9b2d30de97530b49d1c6e179db0c3f">The Self-Assembling Brain: How Neural Networks Grow Smarter</a>. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.</p>





<ul class="wp-block-list"><li><a href="http://www.flygen.org/">Hiesinger Neurogenetics Laboratory</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/HiesingerLab">@HiesingerLab.</a></li><li>Book: <a href="https://www.amazon.com/gp/product/0691181225/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691181225&amp;linkId=7d9b2d30de97530b49d1c6e179db0c3f">The Self-Assembling Brain: How Neural Networks Grow Smarter</a></li></ul>





<p>0:00 - Intro
3:01 - The Self-Assembling Brain
21:14 - Including growth in networks
27:52 - Information unfolding and algorithmic growth
31:27 - Cellular automata
40:43 - Learning as a continuum of growth
45:01 - Robustness, autonomous agents
49:11 - Metabolism vs. connectivity
58:00 - Feedback at all levels
1:05:32 - Generality vs. specificity
1:10:36 - Whole brain emulation
1:20:38 - Changing view of intelligence
1:26:34 - Popular and wrong vs. unknown and right</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1620/124.mp3" length="95781247" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.





Hiesinger Neurogenetics LaboratoryTwitter:&nbsp;@HiesingerLab.Book: The Self-Assembling Brain: How Neural Networks Grow Smarter





0:00 - Intro
3:01 - The Self-Assembling Brain
21:14 - Including growth in networks
27:52 - Information unfolding and algorithmic growth
31:27 - Cellular automata
40:43 - Learning as a continuum of growth
45:01 - Robustness, autonomous agents
49:11 - Metabolism vs. connectivity
58:00 - Feedback at all levels
1:05:32 - Generality vs. specificity
1:10:36 - Whole brain emulation
1:20:38 - Changing view of intelligence
1:26:34 - Popular and wrong vs. unknown and right]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2022/01/art-124-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2022/01/art-124-01.jpg</url>
		<title>BI 124 Peter Robin Hiesinger: The Self-Assembling Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:39:27</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.





Hiesinger Neurogenetics LaboratoryTwitter:&nbsp;@HiesingerLab.Book: The Self-Assembling Brain: How Neural Networks Grow Smarter





0:00 - Intro
3:01 - The Self-Assembling]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2022/01/art-124-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 123 Irina Rish: Continual Learning</title>
	<link>https://braininspired.co/podcast/123/</link>
	<pubDate>Sun, 26 Dec 2021 15:46:13 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1615</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.</p>



<ul class="wp-block-list"><li><a href="https://sites.google.com/site/irinarish/">Irina's website.</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/irinarish">@irinarish</a></li><li>Related papers:<ul><li><a href="https://arxiv.org/abs/1806.09077">Beyond Backprop: Online Alternating Minimization with Auxiliary Variables</a>.</li><li><a href="https://arxiv.org/pdf/2012.13490.pdf">Towards Continual Reinforcement Learning: A Review and Perspectives</a>.</li></ul></li><li>Lifelong learning video tutorial: <a href="https://www.youtube.com/watch?v=5wwbOBFBMbs">DLRL Summer School 2021 - Lifelong Learning - Irina Rish</a>.</li></ul>





<p>0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface,]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.</p>



<ul class="wp-block-list"><li><a href="https://sites.google.com/site/irinarish/">Irina's website.</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/irinarish">@irinarish</a></li><li>Related papers:<ul><li><a href="https://arxiv.org/abs/1806.09077">Beyond Backprop: Online Alternating Minimization with Auxiliary Variables</a>.</li><li><a href="https://arxiv.org/pdf/2012.13490.pdf">Towards Continual Reinforcement Learning: A Review and Perspectives</a>.</li></ul></li><li>Lifelong learning video tutorial: <a href="https://www.youtube.com/watch?v=5wwbOBFBMbs">DLRL Summer School 2021 - Lifelong Learning - Irina Rish</a>.</li></ul>





<p>0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1615/123.mp3" length="76124999" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.



Irina's website
Twitter: @irinarish
Related papers: Beyond Backprop: Online Alternating Minimization with Auxiliary Variables; Towards Continual Reinforcement Learning: A Review and Perspectives
Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish





0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/12/art-123-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/12/art-123-01.jpg</url>
		<title>BI 123 Irina Rish: Continual Learning</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:18:59</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning, seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help life]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/12/art-123-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 122 Kohitij Kar: Visual Intelligence</title>
	<link>https://braininspired.co/podcast/122/</link>
	<pubDate>Sun, 12 Dec 2021 22:44:37 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1611</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong>Support the show to get full episodes and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in <a href="https://braininspired.co/podcast/75/">James DiCarlo's</a> lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition.</p>



<ul class="wp-block-list"><li><a href="https://vital-kolab.org/">VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/KohitijKar">@KohitijKar</a>.</li><li>Related papers<ul><li><a href="https://www.biorxiv.org/content/10.1101/354753v1.full.pdf">Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior.</a></li><li><a href="https://www.gwern.net/docs/ai/2019-bashivan.pdf">Neural population control via deep image synthesis</a>.</li></ul></li><li><a href="https://braininspired.co/podcast/75/">BI 075 Jim DiCarlo: Reverse Engineering Vision</a></li></ul>





<p>0:00 - Intro
3:49 - Background
13:51 - Where are we in understanding vision?
19:46 - Benchmarks
21:21 - Falsifying models
23:19 - Modeling vs. experiment speed
29:26 - Simple vs complex models
35:34 - Dorsal visual stream and deep learning
44:10 - Modularity and brain area roles
50:58 - Chemogenetic perturbation, DREADDs
57:10 - Future lab vision, clinical applications
1:03:55 - Controlling visual neurons via image synthesis
1:12:14 - Is it enough to study nonhuman animals?
1:18:55 - Neuro/AI intersection
1:26:54 - What is intelligence?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes and join the Discord community.









Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neu]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong>Support the show to get full episodes and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in <a href="https://braininspired.co/podcast/75/">James DiCarlo's</a> lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition.</p>



<ul class="wp-block-list"><li><a href="https://vital-kolab.org/">VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/KohitijKar">@KohitijKar</a>.</li><li>Related papers<ul><li><a href="https://www.biorxiv.org/content/10.1101/354753v1.full.pdf">Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior.</a></li><li><a href="https://www.gwern.net/docs/ai/2019-bashivan.pdf">Neural population control via deep image synthesis</a>.</li></ul></li><li><a href="https://braininspired.co/podcast/75/">BI 075 Jim DiCarlo: Reverse Engineering Vision</a></li></ul>





<p>0:00 - Intro
3:49 - Background
13:51 - Where are we in understanding vision?
19:46 - Benchmarks
21:21 - Falsifying models
23:19 - Modeling vs. experiment speed
29:26 - Simple vs complex models
35:34 - Dorsal visual stream and deep learning
44:10 - Modularity and brain area roles
50:58 - Chemogenetic perturbation, DREADDs
57:10 - Future lab vision, clinical applications
1:03:55 - Controlling visual neurons via image synthesis
1:12:14 - Is it enough to study nonhuman animals?
1:18:55 - Neuro/AI intersection
1:26:54 - What is intelligence?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1611/122.mp3" length="89867309" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes and join the Discord community.









Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition.



VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB
Twitter: @KohitijKar
Related papers: Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior; Neural population control via deep image synthesis
BI 075 Jim DiCarlo: Reverse Engineering Vision





0:00 - Intro
3:49 - Background
13:51 - Where are we in understanding vision?
19:46 - Benchmarks
21:21 - Falsifying models
23:19 - Modeling vs. experiment speed
29:26 - Simple vs complex models
35:34 - Dorsal visual stream and deep learning
44:10 - Modularity and brain area roles
50:58 - Chemogenetic perturbation, DREADDs
57:10 - Future lab vision, clinical applications
1:03:55 - Controlling visual neurons via image synthesis
1:12:14 - Is it enough to study nonhuman animals?
1:18:55 - Neuro/AI intersection
1:26:54 - What is intelligence?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/12/art-122-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/12/art-122-01.jpg</url>
		<title>BI 122 Kohitij Kar: Visual Intelligence</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:33:18</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes and join the Discord community.









Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition.



VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB
Twitter: @KohitijKar
Related papers: Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior; Neural population control via deep image synthesis
BI 075 Jim DiCarlo: Reverse]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/12/art-122-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 121 Mac Shine: Systems Neurobiology</title>
	<link>https://braininspired.co/podcast/121/</link>
	<pubDate>Thu, 02 Dec 2021 17:24:16 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1606</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.</p>



<ul class="wp-block-list"><li><a href="https://shine-lab.org/">Shine Lab</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/jmacshine">@jmacshine</a></li><li>Related papers<ul><li><a href="https://shine-lab.org/wp-content/uploads/2021/09/2020_progneuro.pdf">The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics</a>.</li><li><a href="https://shine-lab.org/wp-content/uploads/2021/09/2021_natureneuro.pdf">Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics</a>.</li></ul></li></ul>





<p>0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus circuitry
40:30 - Cerebellum
46:15 - Predictive processing
49:32 - Brain as dynamical attractor landscape
56:48 - System 1 and system 2
1:02:38 - How to think about the thalamus
1:06:45 - Causality in complex systems
1:11:09 - Clinical applications
1:15:02 - Ascending arousal system and neuromodulators
1:27:48 - Implications for AI
1:33:40 - Career serendipity
1:35:12 - Advice</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and c]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.</p>



<ul class="wp-block-list"><li><a href="https://shine-lab.org/">Shine Lab</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/jmacshine">@jmacshine</a></li><li>Related papers<ul><li><a href="https://shine-lab.org/wp-content/uploads/2021/09/2020_progneuro.pdf">The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics</a>.</li><li><a href="https://shine-lab.org/wp-content/uploads/2021/09/2021_natureneuro.pdf">Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics</a>.</li></ul></li></ul>





<p>0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus circuitry
40:30 - Cerebellum
46:15 - Predictive processing
49:32 - Brain as dynamical attractor landscape
56:48 - System 1 and system 2
1:02:38 - How to think about the thalamus
1:06:45 - Causality in complex systems
1:11:09 - Clinical applications
1:15:02 - Ascending arousal system and neuromodulators
1:27:48 - Implications for AI
1:33:40 - Career serendipity
1:35:12 - Advice</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1606/121.mp3" length="99378709" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.



Shine Lab
Twitter: @jmacshine
Related papers: The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics; Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics





0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus circuitry
40:30 - Cerebellum
46:15 - Predictive processing
49:32 - Brain as dynamical attractor landscape
56:48 - System 1 and system 2
1:02:38 - How to think about the thalamus
1:06:45 - Causality in complex systems
1:11:09 - Clinical applications
1:15:02 - Ascending arousal system and neuromodulators
1:27:48 - Implications for AI
1:33:40 - Career serendipity
1:35:12 - Advice]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/12/art-121-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/12/art-121-01.jpg</url>
		<title>BI 121 Mac Shine: Systems Neurobiology</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:43:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.



Shine Lab
Twitter: @jmacshine
Related papers: The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics; Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics





0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus cir]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/12/art-121-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories</title>
	<link>https://braininspired.co/podcast/120/</link>
	<pubDate>Sun, 21 Nov 2021 21:47:33 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1601</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details, and that, through a complementary process, the brain slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning networks have yet to achieve. We discuss what their theory predicts about how the "correct" process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.</p>



<ul class="wp-block-list"><li>James' <a href="https://www.janelia.org/people/james-fitzgerald">Janelia page</a>.</li><li>Weinan's <a href="https://www.janelia.org/people/weinan-sun">Janelia page</a>.</li><li>Andrew's <a href="https://www.saxelab.org/people/andrewsaxe/">website</a>.</li><li>Twitter: <ul><li>Andrew: <a href="https://twitter.com/SaxeLab">@SaxeLab</a></li><li>Weinan: <a href="https://twitter.com/sunw37">@sunw37</a></li></ul></li><li>Paper we discuss:<ul><li><a href="https://www.biorxiv.org/content/10.1101/2021.10.13.463791v1">Organizing memories for generalization in complementary learning systems</a>.</li></ul></li><li>Andrew's previous episode: <a href="https://braininspired.co/podcast/52/">BI 052 Andrew Saxe: Deep Learning Theory</a></li></ul>





<p>0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that ou]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details, and that, through a complementary process, the brain slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning networks have yet to achieve. We discuss what their theory predicts about how the "correct" process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.</p>



<ul class="wp-block-list"><li>James' <a href="https://www.janelia.org/people/james-fitzgerald">Janelia page</a>.</li><li>Weinan's <a href="https://www.janelia.org/people/weinan-sun">Janelia page</a>.</li><li>Andrew's <a href="https://www.saxelab.org/people/andrewsaxe/">website</a>.</li><li>Twitter: <ul><li>Andrew: <a href="https://twitter.com/SaxeLab">@SaxeLab</a></li><li>Weinan: <a href="https://twitter.com/sunw37">@sunw37</a></li></ul></li><li>Paper we discuss:<ul><li><a href="https://www.biorxiv.org/content/10.1101/2021.10.13.463791v1">Organizing memories for generalization in complementary learning systems</a>.</li></ul></li><li>Andrew's previous episode: <a href="https://braininspired.co/podcast/52/">BI 052 Andrew Saxe: Deep Learning Theory</a></li></ul>





<p>0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1601/120.mp3" length="96328657" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details, and that, through a complementary process, the brain slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning networks have yet to achieve. We discuss what their theory predicts about how the "correct" process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.



James' Janelia page
Weinan's Janelia page
Andrew's website
Twitter: Andrew: @SaxeLab; Weinan: @sunw37
Paper we discuss: Organizing memories for generalization in complementary learning systems
Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory





0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/11/art-120-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/11/art-120-01.jpg</url>
		<title>BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:40:02</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories of individual events, full of particular details, and that a complementary process slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss what their theory predicts about how the "correct" process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.



James' Janelia page.Weinan's Janelia page.Andrew's website.Twitter: Andrew: @SaxeLabWe]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/11/art-120-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 119 Henry Yin: The Crisis in Neuroscience</title>
	<link>https://braininspired.co/podcast/119/</link>
	<pubDate>Thu, 11 Nov 2021 17:56:33 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1510</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Henry and I discuss why he thinks neuroscience is in a crisis (in the <a href="https://www.amazon.com/gp/product/0226458121/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0226458121&amp;linkId=c85752df380add9cf158431c769a8f2b">Thomas Kuhn</a> sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied... by the experimenter.</p>



<ul class="wp-block-list"><li><a href="https://www.neuro.duke.edu/research/faculty-labs/yin-lab">Yin lab</a> at Duke.</li><li>Twitter:&nbsp;<a href="https://twitter.com/HenryYin19">@HenryYin19</a>.</li><li>Related papers<ul><li><a href="https://www.researchgate.net/publication/341706555_The_crisis_in_neuroscience">The Crisis in Neuroscience</a>.</li><li><a href="https://www.researchgate.net/publication/277721485_Restoring_Purpose_in_Behavior">Restoring Purpose in Behavior</a>.</li><li><a href="https://www.sciencedirect.com/science/article/pii/S2589004221009160">Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control</a>.</li></ul></li></ul>





<p>0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our curr]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Henry and I discuss why he thinks neuroscience is in a crisis (in the <a href="https://www.amazon.com/gp/product/0226458121/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0226458121&amp;linkId=c85752df380add9cf158431c769a8f2b">Thomas Kuhn</a> sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied... by the experimenter.</p>



<ul class="wp-block-list"><li><a href="https://www.neuro.duke.edu/research/faculty-labs/yin-lab">Yin lab</a> at Duke.</li><li>Twitter:&nbsp;<a href="https://twitter.com/HenryYin19">@HenryYin19</a>.</li><li>Related papers<ul><li><a href="https://www.researchgate.net/publication/341706555_The_crisis_in_neuroscience">The Crisis in Neuroscience</a>.</li><li><a href="https://www.researchgate.net/publication/277721485_Restoring_Purpose_in_Behavior">Restoring Purpose in Behavior</a>.</li><li><a href="https://www.sciencedirect.com/science/article/pii/S2589004221009160">Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control</a>.</li></ul></li></ul>





<p>0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1510/119.mp3" length="64235208" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied... by the experimenter.



Yin lab at Duke. Twitter: @HenryYin19. Related papers: The Crisis in Neuroscience. Restoring Purpose in Behavior. Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control.





0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/11/art-119-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/11/art-119-01.jpg</url>
		<title>BI 119 Henry Yin: The Crisis in Neuroscience</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:06:36</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are what gets externally supplied... by the experimenter.



Yin lab at Duke.Twitter:&nbsp;@HenryYin19.]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/11/art-119-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 118 Johannes Jäger: Beyond Networks</title>
	<link>https://braininspired.co/podcast/118/</link>
	<pubDate>Mon, 01 Nov 2021 16:59:37 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1467</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Johannes (Yogi) is a freelance philosopher, researcher &amp; educator. We discuss many of the topics in his online course, <a href="https://www.youtube.com/playlist?list=PL8vh-kVsYPqOKJOboONJIQBd8ds0ueM_W">Beyond Networks: The Evolution of Living Systems</a>. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from <a href="https://braininspired.co/podcast/111/">Kevin Mitchell</a>.</p>



<ul class="wp-block-list"><li>Yogi's website and blog: <a href="https://www.johannesjaeger.eu/">Untethered in the Platonic Realm</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/yoginho">@yoginho</a>.</li><li>His youtube course: <a href="https://www.youtube.com/playlist?list=PL8vh-kVsYPqOKJOboONJIQBd8ds0ueM_W">Beyond Networks: The Evolution of Living Systems</a>.</li><li>Kevin Mitchell's previous episode: <a href="https://braininspired.co/podcast/111/">BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness</a>.</li></ul>





<p>0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Johannes (Yogi) is a freelance philosopher, researcher &amp; educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Liv]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Johannes (Yogi) is a freelance philosopher, researcher &amp; educator. We discuss many of the topics in his online course, <a href="https://www.youtube.com/playlist?list=PL8vh-kVsYPqOKJOboONJIQBd8ds0ueM_W">Beyond Networks: The Evolution of Living Systems</a>. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from <a href="https://braininspired.co/podcast/111/">Kevin Mitchell</a>.</p>



<ul class="wp-block-list"><li>Yogi's website and blog: <a href="https://www.johannesjaeger.eu/">Untethered in the Platonic Realm</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/yoginho">@yoginho</a>.</li><li>His youtube course: <a href="https://www.youtube.com/playlist?list=PL8vh-kVsYPqOKJOboONJIQBd8ds0ueM_W">Beyond Networks: The Evolution of Living Systems</a>.</li><li>Kevin Mitchell's previous episode: <a href="https://braininspired.co/podcast/111/">BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness</a>.</li></ul>





<p>0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1467/118.mp3" length="92592437" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Johannes (Yogi) is a freelance philosopher, researcher &amp; educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.



Yogi's website and blog: Untethered in the Platonic Realm. Twitter: @yoginho. His YouTube course: Beyond Networks: The Evolution of Living Systems. Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness.





0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/10/art-118-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/10/art-118-01.jpg</url>
		<title>BI 118 Johannes Jäger: Beyond Networks</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:36:08</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Johannes (Yogi) is a freelance philosopher, researcher &amp; educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.



Yogi's website and blog: Untethered in the Pla]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/10/art-118-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 117 Anil Seth: Being You</title>
	<link>https://braininspired.co/podcast/117/</link>
	<pubDate>Tue, 19 Oct 2021 17:33:04 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1455</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Anil and I discuss a range of topics from his book, <a href="https://www.penguinrandomhouse.com/books/566315/being-you-by-anil-seth/#">BEING YOU A New Science of Consciousness</a>. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", which is David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.
</p>





<p>Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states to control them.
</p>



<p>We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests.</p>



<ul class="wp-block-list"><li><a href="https://www.anilseth.com/bio/">Anil's website</a>.</li><li>Twitter: <a href="https://twitter.com/anilkseth">@anilkseth</a>.</li><li>Anil's book: <a href="https://www.penguinrandomhouse.com/books/566315/being-you-by-anil-seth/#">BEING YOU A New Science of Consciousness</a>.</li><li>Megan's previous episode:<ul><li><a href="https://braininspired.co/podcast/73/">BI 073 Megan Peters: Consciousness and Metacognition</a></li></ul></li><li>Steve's previous episodes<ul><li><a href="https://braininspired.co/podcast/99/">BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness</a></li><li><a href="https://braininspired.co/podcast/107/">BI 107 Steve Fleming: Know Thyself</a></li></ul></li></ul>





<p>0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - BEING YOU A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Anil and I discuss a range of topics from his book, BEING YOU A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Anil and I discuss a range of topics from his book, <a href="https://www.penguinrandomhouse.com/books/566315/being-you-by-anil-seth/#">BEING YOU A New Science of Consciousness</a>. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", which is David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.
</p>





<p>Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states to control them.
</p>



<p>We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests.</p>



<ul class="wp-block-list"><li><a href="https://www.anilseth.com/bio/">Anil's website</a>.</li><li>Twitter: <a href="https://twitter.com/anilkseth">@anilkseth</a>.</li><li>Anil's book: <a href="https://www.penguinrandomhouse.com/books/566315/being-you-by-anil-seth/#">BEING YOU A New Science of Consciousness</a>.</li><li>Megan's previous episode:<ul><li><a href="https://braininspired.co/podcast/73/">BI 073 Megan Peters: Consciousness and Metacognition</a></li></ul></li><li>Steve's previous episodes<ul><li><a href="https://braininspired.co/podcast/99/">BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness</a></li><li><a href="https://braininspired.co/podcast/107/">BI 107 Steve Fleming: Know Thyself</a></li></ul></li></ul>





<p>0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - BEING YOU A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1455/117.mp3" length="88766181" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Anil and I discuss a range of topics from his book, BEING YOU A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", which is David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.






Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states to control them.




We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous Brain Inspired guests.



Anil's website. Twitter: @anilkseth. Anil's book: BEING YOU A New Science of Consciousness. Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition. Steve's previous episodes: BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness; BI 107 Steve Fleming: Know Thyself.





0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - BEING YOU A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/10/art-117-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/10/art-117-01.jpg</url>
		<title>BI 117 Anil Seth: Being You</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:32:09</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Anil and I discuss a range of topics from his book, BEING YOU A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", which is David Chalmers' term for our enduring difficulty in explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.






Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily st]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/10/art-117-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 116 Michael W. Cole: Empirical Neural Networks</title>
	<link>https://braininspired.co/podcast/116/</link>
	<pubDate>Tue, 12 Oct 2021 16:36:10 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1450</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">Kanaka Rajan</a>, <a rel="noreferrer noopener" href="https://braininspired.co/podcast/26/" target="_blank">Kendrick Kay</a>, and Patryk Laurent.</p>



<ul class="wp-block-list"><li><a rel="noreferrer noopener" href="https://www.colelab.org/#" target="_blank">The Cole Neurocognition lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/TheColeLab">@TheColeLab</a>.</li><li>Related papers<ul><li><a href="https://www.colelab.org/pubs/2019_ItoHearne_TiCS.pdf">Discovering the Computational Relevance of Brain Network Organization</a>.</li><li><a href="https://doi.org/10.1101/2020.12.24.424353">Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior.</a></li></ul></li><li>Kendrick Kay's previous episode: <a href="https://braininspired.co/?s=kendrick">BI 026 Kendrick Kay: A Model By Any Other Name</a>.</li><li>Kanaka Rajan's previous episode: <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">BI 054 Kanaka Rajan: How Do We Switch Behaviors?</a></li></ul>





<p>0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to trai]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">Kanaka Rajan</a>, <a rel="noreferrer noopener" href="https://braininspired.co/podcast/26/" target="_blank">Kendrick Kay</a>, and Patryk Laurent.</p>



<ul class="wp-block-list"><li><a rel="noreferrer noopener" href="https://www.colelab.org/#" target="_blank">The Cole Neurocognition lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/TheColeLab">@TheColeLab</a>.</li><li>Related papers<ul><li><a href="https://www.colelab.org/pubs/2019_ItoHearne_TiCS.pdf">Discovering the Computational Relevance of Brain Network Organization</a>.</li><li><a href="https://doi.org/10.1101/2020.12.24.424353">Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior.</a></li></ul></li><li>Kendrick Kay's previous episode: <a href="https://braininspired.co/?s=kendrick">BI 026 Kendrick Kay: A Model By Any Other Name</a>.</li><li>Kanaka Rajan's previous episode: <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">BI 054 Kanaka Rajan: How Do We Switch Behaviors?</a></li></ul>





<p>0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1450/116.mp3" length="87991556" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.



The Cole Neurocognition lab. Twitter: @TheColeLab. Related papers: Discovering the Computational Relevance of Brain Network Organization; Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior. Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name. Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors?





0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/10/art-116-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/10/art-116-01.jpg</url>
		<title>BI 116 Michael W. Cole: Empirical Neural Networks</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:31:20</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), and/or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules thr]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/10/art-116-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 115 Steve Grossberg: Conscious Mind, Resonant Brain</title>
	<link>https://braininspired.co/podcast/115/</link>
	<pubDate>Sat, 02 Oct 2021 20:36:51 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1443</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Steve and I discuss his book <a href="https://www.amazon.com/gp/product/B094W6BBKN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B094W6BBKN&amp;linkId=edb32e098e6dba02c0471d73d8b39e64">Conscious Mind, Resonant Brain: How Each Brain Makes a Mind</a>.&nbsp; The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>, <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>, and <a href="https://braininspired.co/podcast/113/">John Krakauer</a>.</p>





<ul class="wp-block-list"><li>Steve's <a href="https://sites.bu.edu/steveg/">BU website</a>.</li><li><a href="https://www.amazon.com/gp/product/B094W6BBKN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B094W6BBKN&amp;linkId=edb32e098e6dba02c0471d73d8b39e64">Conscious Mind, Resonant Brain: How Each Brain Makes a Mind</a></li><li>Previous Brain Inspired episode:<ul><li><a href="https://braininspired.co/podcast/82/">BI 082 Steve Grossberg: Adaptive Resonance Theory</a></li></ul></li></ul>





<p>0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.&nbsp; The book is a huge collection of his models and their prediction]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>



<a href="https://www.patreon.com/braininspired"></a>





<p>Steve and I discuss his book <a href="https://www.amazon.com/gp/product/B094W6BBKN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B094W6BBKN&amp;linkId=edb32e098e6dba02c0471d73d8b39e64">Conscious Mind, Resonant Brain: How Each Brain Makes a Mind</a>.&nbsp; The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>, <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>, and <a href="https://braininspired.co/podcast/113/">John Krakauer</a>.</p>





<ul class="wp-block-list"><li>Steve's <a href="https://sites.bu.edu/steveg/">BU website</a>.</li><li><a href="https://www.amazon.com/gp/product/B094W6BBKN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B094W6BBKN&amp;linkId=edb32e098e6dba02c0471d73d8b39e64">Conscious Mind, Resonant Brain: How Each Brain Makes a Mind</a></li><li>Previous Brain Inspired episode:<ul><li><a href="https://braininspired.co/podcast/82/">BI 082 Steve Grossberg: Adaptive Resonance Theory</a></li></ul></li></ul>





<p>0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1443/115.mp3" length="80639936" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.





Steve's BU website. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory.





0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/09/art-115-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/09/art-115-01.jpg</url>
		<title>BI 115 Steve Grossberg: Conscious Mind, Resonant Brain</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:23:41</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.









Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.





Steve's BU website. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory.





0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and co]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/09/art-115-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind</title>
	<link>https://braininspired.co/podcast/114/</link>
	<pubDate>Wed, 22 Sep 2021 16:15:39 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1437</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.</p>





<ul class="wp-block-list"><li><a href="https://marksprevak.com/">Mark's website</a>.</li><li><a href="https://www.ed.ac.uk/profile/mazviita-chirimuuta">Mazviita's University of Edinburgh page</a>.</li><li>Twitter (Mark): <a href="https://twitter.com/msprevak">@msprevak</a>.</li><li>Mazviita's previous Brain Inspired episode:<ul><li><a href="https://braininspired.co/podcast/72/">BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality</a></li></ul></li><li>The related book we discuss:<ul><li><a href="https://www.amazon.com/gp/product/0367733668/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0367733668&amp;linkId=780824e84b72c8a7c58ce501b97bbce8">The Routledge Handbook of the Computational Mind 2018 Mark Sprevak Matteo Colombo (Editors)</a></li></ul></li></ul>



<p>0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to expla]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.</p>





<ul class="wp-block-list"><li><a href="https://marksprevak.com/">Mark's website</a>.</li><li><a href="https://www.ed.ac.uk/profile/mazviita-chirimuuta">Mazviita's University of Edinburgh page</a>.</li><li>Twitter (Mark): <a href="https://twitter.com/msprevak">@msprevak</a>.</li><li>Mazviita's previous Brain Inspired episode:<ul><li><a href="https://braininspired.co/podcast/72/">BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality</a></li></ul></li><li>The related book we discuss:<ul><li><a href="https://www.amazon.com/gp/product/0367733668/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0367733668&amp;linkId=780824e84b72c8a7c58ce501b97bbce8">The Routledge Handbook of the Computational Mind 2018 Mark Sprevak Matteo Colombo (Editors)</a></li></ul></li></ul>



<p>0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1437/114.mp3" length="94498104" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.





Mark's website. Mazviita's University of Edinburgh page. Twitter (Mark): @msprevak. Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality. The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Editors).



0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/09/art-114-01.png"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/09/art-114-01.png</url>
		<title>BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:38:07</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.





Mark's website.Mazviita's University of Edinburgh page.Twitter (Mark): @msprevak.Mazviita's previous Brain Inspired episode:BI 072 Mazviita Chirimuuta: Understanding, Prediction, and RealityThe rela]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/09/art-114-01.png"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 113 David Barack and John Krakauer: Two Views On Cognition</title>
	<link>https://braininspired.co/podcast/113/</link>
	<pubDate>Sun, 12 Sep 2021 15:10:52 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1432</guid>
	<description><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>David and John discuss some of the concepts from their recent paper <a href="https://www.nature.com/articles/s41583-021-00448-6">Two Views on the Cognitive Brain</a>, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.</p>





<ul class="wp-block-list"><li><a href="https://presidentialscholars.columbia.edu/directory/david-barack">David's webpage.</a></li><li><a href="http://blam-lab.org">John's Lab.</a></li><li>Twitter:&nbsp;<ul><li>David: <a href="https://twitter.com/dlbarack">@DLBarack</a></li><li>John: <a href="https://twitter.com/blamlab">@blamlab</a></li></ul></li><li>Paper: <a href="https://www.nature.com/articles/s41583-021-00448-6">Two Views on the Cognitive Brain</a>.</li><li>John's previous episodes:<ul><li><a href="https://braininspired.co/podcast/25/">BI 025 John Krakauer: Understanding Cognition</a></li><li><a href="https://braininspired.co/podcast/77/">BI 077 David and John Krakauer: Part 1</a></li><li><a href="https://braininspired.co/podcast/78/">BI 078 David and John Krakauer: Part 2</a></li></ul></li></ul>



<p>Timestamps</p>



<p>0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training 
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?</p>]]></description>
	<itunes:subtitle><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical sy]]></itunes:subtitle>
	<content:encoded><![CDATA[<p class="has-text-align-center"><strong><a href="https://www.patreon.com/braininspired">Support the show</a> to get full episodes, full archive, and join the Discord community</strong>.</p>







<p>David and John discuss some of the concepts from their recent paper <a href="https://www.nature.com/articles/s41583-021-00448-6">Two Views on the Cognitive Brain</a>, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.</p>





<ul class="wp-block-list"><li><a href="https://presidentialscholars.columbia.edu/directory/david-barack">David's webpage.</a></li><li><a href="http://blam-lab.org">John's Lab.</a></li><li>Twitter:&nbsp;<ul><li>David: <a href="https://twitter.com/dlbarack">@DLBarack</a></li><li>John: <a href="https://twitter.com/blamlab">@blamlab</a></li></ul></li><li>Paper: <a href="https://www.nature.com/articles/s41583-021-00448-6">Two Views on the Cognitive Brain</a>.</li><li>John's previous episodes:<ul><li><a href="https://braininspired.co/podcast/25/">BI 025 John Krakauer: Understanding Cognition</a></li><li><a href="https://braininspired.co/podcast/77/">BI 077 David and John Krakauer: Part 1</a></li><li><a href="https://braininspired.co/podcast/78/">BI 078 David and John Krakauer: Part 2</a></li></ul></li></ul>



<p>Timestamps</p>



<p>0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1432/113.mp3" length="87316082" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.







David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.





David's webpage. John's Lab. Twitter: David: @DLBarack; John: @blamlab. Paper: Two Views on the Cognitive Brain. John's previous episodes: BI 025 John Krakauer: Understanding Cognition; BI 077 David and John Krakauer: Part 1; BI 078 David and John Krakauer: Part 2.



Timestamps



0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/09/113-art-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/09/113-art-01.jpg</url>
		<title>BI 113 David Barack and John Krakauer: Two Views On Cognition</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:30:38</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Support the show to get full episodes, full archive, and join the Discord community.
David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.
David's webpage.John's Lab.Twitter:&nbsp;David: @DLBarackJohn: @blamlabPaper: Two Views on the Cognitive Brain.John's previous episodes:BI 025 John Krakauer: Understanding CognitionBI 077 David and John Krakauer: Part 1BI 078 David and John Krakauer: Part 2



Timestamps



0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training 
31:58 - Two Views on the Cognitive Brain
44:18 - Re]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/09/113-art-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI ViDA Panel Discussion: Deep RL and Dopamine</title>
	<link>https://braininspired.co/podcast/vida-2021/</link>
	<pubDate>Thu, 02 Sep 2021 14:55:50 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1426</guid>
	<description><![CDATA[]]></description>
	<itunes:subtitle><![CDATA[]]></itunes:subtitle>
	<content:encoded><![CDATA[]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1426/vida-2021.mp3" length="55418977" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/09/vidaArtboard-1-1.png"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/09/vidaArtboard-1-1.png</url>
		<title>BI ViDA Panel Discussion: Deep RL and Dopamine</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>00:57:25</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/09/vidaArtboard-1-1.png"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine</title>
	<link>https://braininspired.co/podcast/112/</link>
	<pubDate>Thu, 26 Aug 2021 15:42:35 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1414</guid>
	<description><![CDATA[<h4>BI 112:</h4>
<h2>Ali Mohebi and Ben Engelhard</h2>
<strong>The Many Faces of Dopamine</strong>
				<h2 style="text-align: center;">Announcement:</h2>
<p style="text-align: center;"><strong>Ben has started his new lab and is recruiting grad students. </strong></p>
<p style="text-align: center;"><strong>Check out his lab here and apply!</strong></p>
<h2 style="text-align: center;"><a href="https://engelhardlab.com/" style="color: #ccffff;">Engelhard Lab</a></h2>
				Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning &#8211; dopamine (DA) neurons fire when our reward expectations aren&#8217;t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
				<p style="text-align: center;">Dopamine: A Simple <em>AND</em> Complex Story </p>
<p style="text-align: center;">by <a href="https://twitter.com/d4phn3c" style="color: #ffffff;">Daphne Cornelisse</a></p>
<h3>Guests </h3>
<ul>
<li>
<a href="https://mohebial.com/index.html" rev="en_rl_none">Ali Mohebi</a>
<ul>
<li>
<a href="https://twitter.com/mohebial" rev="en_rl_none">@mohebial </a>
</li>
</ul>
</li>
<li>
<a href="https://engelhardlab.com/" rev="en_rl_none">Ben Engelhard</a>
</li>
</ul>
				<p>Timestamps:</p>

0:00 &#8211; Intro
5:02 &#8211; Virtual Dopamine Conference
9:56 &#8211; History of dopamine&#8217;s roles
16:47 &#8211; Dopamine circuits
21:13 &#8211; Multiple roles for dopamine
31:43 &#8211; Deep learning panel discussion
50:14 &#8211; Computation and neuromodulation]]></description>
	<itunes:subtitle><![CDATA[BI 112:
Ali Mohebi and Ben Engelhard
The Many Faces of Dopamine
			
				
				
				
				
				Announcement:
Ben has started his new lab and is recruiting grad students. 
Check out his lab here and apply!
Engelhard Lab
&nbsp;
			
				
				
				
				
				
		]]></itunes:subtitle>
	<content:encoded><![CDATA[<h4>BI 112:</h4>
<h2>Ali Mohebi and Ben Engelhard</h2>
<strong>The Many Faces of Dopamine</strong>
				<h2 style="text-align: center;">Announcement:</h2>
<p style="text-align: center;"><strong>Ben has started his new lab and is recruiting grad students. </strong></p>
<p style="text-align: center;"><strong>Check out his lab here and apply!</strong></p>
<h2 style="text-align: center;"><a href="https://engelhardlab.com/" style="color: #ccffff;">Engelhard Lab</a></h2>
				Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning &#8211; dopamine (DA) neurons fire when our reward expectations aren&#8217;t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
				<p style="text-align: center;">Dopamine: A Simple <em>AND</em> Complex Story </p>
<p style="text-align: center;">by <a href="https://twitter.com/d4phn3c" style="color: #ffffff;">Daphne Cornelisse</a></p>
<h3>Guests </h3>
<ul>
<li>
<a href="https://mohebial.com/index.html" rev="en_rl_none">Ali Mohebi</a>
<ul>
<li>
<a href="https://twitter.com/mohebial" rev="en_rl_none">@mohebial </a>
</li>
</ul>
</li>
<li>
<a href="https://engelhardlab.com/" rev="en_rl_none">Ben Engelhard</a>
</li>
</ul>
				<p>Timestamps:</p>

0:00 &#8211; Intro
5:02 &#8211; Virtual Dopamine Conference
9:56 &#8211; History of dopamine&#8217;s roles
16:47 &#8211; Dopamine circuits
21:13 &#8211; Multiple roles for dopamine
31:43 &#8211; Deep learning panel discussion
50:14 &#8211; Computation and neuromodulation]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1414/112.mp3" length="71279990" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[BI 112:
Ali Mohebi and Ben Engelhard
The Many Faces of Dopamine
				Announcement:
Ben has started his new lab and is recruiting grad students. 
Check out his lab here and apply!
Engelhard Lab
				Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning &#8211; dopamine (DA) neurons fire when our reward expectations aren&#8217;t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with respect to neuromodulators like DA, how different time and spatial scales interact, and more.
				Dopamine: A Simple AND Complex Story 
by Daphne Cornelisse
Guests 


Ali Mohebi


@mohebial 




Ben Engelhard
				Timestamps:

0:00 &#8211; Intro
5:02 &#8211; Virtual Dopamine Conference
9:56 &#8211; History of dopamine&#8217;s roles
16:47 &#8211; Dopamine circuits
21:13 &#8211; Multiple roles for dopamine
31:43 &#8211; Deep learning panel discussion
50:14 &#8211; Computation and neuromodulation]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/08/art-112-3-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/08/art-112-3-01.jpg</url>
		<title>BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:13:56</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[BI 112:
Ali Mohebi and Ben Engelhard
The Many Faces of Dopamine
				Announcement:
Ben has started his new lab and is recruiting grad students. 
Check out his lab here and apply!
Engelhard Lab
				Ali and Ben discuss the ever-expanding discoveries about the roles dopamine plays in our cognition. Dopamine is known to play a role in learning &#8211; dopamine (DA) neurons fire when our reward expectations aren&#8217;t met, and that signal helps adjust our expectations. Roughly, DA corresponds to a reward prediction error. The reward prediction error has helped reinforcement learning in AI develop into a raging success, especially with deep reinforcement learning models trained to outperform humans in games like chess and Go. But DA likely contributes much more to brain function. We discuss many of those possible roles, how to think about computation with r]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/08/art-112-3-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 06: Advancing Neuro Deep Learning Panel</title>
	<link>https://braininspired.co/podcast/nma-6/</link>
	<pubDate>Thu, 19 Aug 2021 13:48:29 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1403</guid>
	<description><![CDATA[]]></description>
	<itunes:subtitle><![CDATA[]]></itunes:subtitle>
	<content:encoded><![CDATA[]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1403/nma-6.mp3" length="77609938" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-6-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/08/art-nma-6-01.jpg</url>
		<title>BI NMA 06: Advancing Neuro Deep Learning Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:20:32</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-6-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 05: NLP and Generative Models Panel</title>
	<link>https://braininspired.co/podcast/nma-5/</link>
	<pubDate>Fri, 13 Aug 2021 14:11:40 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1288</guid>
	<description><![CDATA[<h4>BI NMA 05:</h4>
<strong>NLP and Generative Models Panel</strong>
				<p style="text-align: left;">This is the 5th in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. This is the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences with &#8220;doing more with fewer parameters&#8221;: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</p>
<h3>Panelists</h3>
<ul>
<li><a href="https://psych.la.psu.edu/directory/bpw10">Brad Wyble.</a>
<ul>
<li><a href="https://twitter.com/bradpwyble">@bradpwyble</a>.</li>
</ul>
</li>
<li><a href="https://www.kyunghyuncho.me/">Kyunghyun Cho.</a>
<ul>
<li><a href="https://twitter.com/kchonyc">@kchonyc</a>.</li>
</ul>
</li>
<li><a href="https://hhexiy.github.io/">He He</a>.
<ul>
<li><a href="https://twitter.com/hhexiy">@hhexiy.</a></li>
</ul>
</li>
<li><a href="https://www.stern.nyu.edu/faculty/bio/joao-sedoc">João Sedoc.</a>
<ul>
<li><a href="https://twitter.com/JoaoSedoc">@JoaoSedoc</a>.</li>
</ul>
</li>
</ul>
<p>The other panels: </p>
<ul id="block-55c79398-4f69-4571-87c5-d1d5bd7261c3">
<li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li>
<li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li>
<li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li>
<li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li>
<li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li>
</ul>]]></description>
	<itunes:subtitle><![CDATA[BI NMA 05:
NLP and Generative Models Panel
			
				
				
				
				
				
				
					
				
				
			
				
				
				
				
				
			
				
				
				
				
				This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online co]]></itunes:subtitle>
	<content:encoded><![CDATA[<h4>BI NMA 05:</h4>
<strong>NLP and Generative Models Panel</strong>
				<p style="text-align: left;">This is the 5th in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. This is the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences with &#8220;doing more with fewer parameters&#8221;: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</p>
<h3>Panelists</h3>
<ul>
<li><a href="https://psych.la.psu.edu/directory/bpw10">Brad Wyble.</a>
<ul>
<li><a href="https://twitter.com/bradpwyble">@bradpwyble</a>.</li>
</ul>
</li>
<li><a href="https://www.kyunghyuncho.me/">Kyunghyun Cho.</a>
<ul>
<li><a href="https://twitter.com/kchonyc">@kchonyc</a>.</li>
</ul>
</li>
<li><a href="https://hhexiy.github.io/">He He</a>.
<ul>
<li><a href="https://twitter.com/hhexiy">@hhexiy.</a></li>
</ul>
</li>
<li><a href="https://www.stern.nyu.edu/faculty/bio/joao-sedoc">João Sedoc.</a>
<ul>
<li><a href="https://twitter.com/JoaoSedoc">@JoaoSedoc</a>.</li>
</ul>
</li>
</ul>
<p>The other panels: </p>
<ul id="block-55c79398-4f69-4571-87c5-d1d5bd7261c3">
<li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li>
<li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li>
<li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li>
<li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li>
<li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li>
</ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1288/nma-5.mp3" length="80782243" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[BI NMA 05:
NLP and Generative Models Panel
				This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences with &#8220;doing more with fewer parameters&#8221;: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Panelists
 

Brad Wyble.

@bradpwyble.


Kyunghyun Cho.

@kchonyc.


He He.

@hhexiy.


João Sedoc.

@JoaoSedoc.



The other panels: 

First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-5-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/08/art-nma-5-01.jpg</url>
		<title>BI NMA 05: NLP and Generative Models Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:23:50</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[BI NMA 05:
NLP and Generative Models Panel
				This is the 5th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the 2nd of 3 in the deep learning series. In this episode, the panelists discuss their experiences with &#8220;doing more with fewer parameters&#8221;: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Panelists
 

Brad Wyble.

@bradpwyble.


Kyunghyun Cho.

@kchonyc.


He He.

@hhexiy.


João Sedoc.

@JoaoSedoc.



The other panels: 

First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochasti]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-5-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 04: Deep Learning Basics Panel</title>
	<link>https://braininspired.co/podcast/nma-4/</link>
	<pubDate>Fri, 06 Aug 2021 13:37:19 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1270</guid>
	<description><![CDATA[<h4>BI NMA 04:</h4>
<strong>Deep Learning Basics Panel</strong>
				<p style="text-align: left;">This is the 4th in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. This is the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</p>
<h3>Guests </h3>
<ul>
<li><a href="https://www.linkedin.com/in/amitakapoor/?originalSubdomain=in">Amita Kapoor</a></li>
<li><a href="https://www.cis.upenn.edu/~ungar/">Lyle Ungar</a>
<ul>
<li><a href="https://twitter.com/LyleUngar">@LyleUngar</a></li>
</ul>
</li>
<li><a href="https://ganguli-gang.stanford.edu/surya.html">Surya Ganguli</a>
<ul>
<li><a href="https://twitter.com/SuryaGanguli">@SuryaGanguli</a></li>
</ul>
</li>
</ul>
<h4>The other panels: </h4>
<ul id="block-55c79398-4f69-4571-87c5-d1d5bd7261c3">
<li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li>
<li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li>
<li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li>
<li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li>
<li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li>
</ul>
<p>Timestamps:</p>]]></description>
	<itunes:subtitle><![CDATA[BI NMA 04:
Deep Learning Basics Panel
			
				
				
				
				
				
				
					
				
				
			
				
				
				
				
				
			
				
				
				
				
				This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computa]]></itunes:subtitle>
	<content:encoded><![CDATA[<h4>BI NMA 04:</h4>
<strong>Deep Learning Basics Panel</strong>
				<p style="text-align: left;">This is the 4th in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. This is the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</p>
<h3>Guests </h3>
<ul>
<li><a href="https://www.linkedin.com/in/amitakapoor/?originalSubdomain=in">Amita Kapoor</a></li>
<li><a href="https://www.cis.upenn.edu/~ungar/">Lyle Ungar</a>
<ul>
<li><a href="https://twitter.com/LyleUngar">@LyleUngar</a></li>
</ul>
</li>
<li><a href="https://ganguli-gang.stanford.edu/surya.html">Surya Ganguli</a>
<ul>
<li><a href="https://twitter.com/SuryaGanguli">@SuryaGanguli</a></li>
</ul>
</li>
</ul>
<h4>The other panels: </h4>
<ul id="block-55c79398-4f69-4571-87c5-d1d5bd7261c3">
<li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li>
<li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li>
<li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li>
<li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li>
<li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li>
</ul>
<p>Timestamps:</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1270/nma-4.mp3" length="57282479" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[BI NMA 04:
Deep Learning Basics Panel
				This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Guests 

Amita Kapoor
Lyle Ungar

@LyleUngar


Surya Ganguli

@SuryaGanguli



The other panels: 

First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fifth panel, about “doing more with fewer parameters”: Convnets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.

Timestamps:]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-4-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/08/art-nma-4-01.jpg</url>
		<title>BI NMA 04: Deep Learning Basics Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>00:59:21</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[BI NMA 04:
Deep Learning Basics Panel
				This is the 4th in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. This is the first of 3 in the deep learning series. In this episode, the panelists discuss their experiences with some basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Guests 

Amita Kapoor
Lyle Ungar

@LyleUngar


Surya Ganguli

@SuryaGanguli



The other panels: 

First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Baye]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/08/art-nma-4-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness</title>
	<link>https://braininspired.co/podcast/111/</link>
	<pubDate>Wed, 28 Jul 2021 03:08:10 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1260</guid>
	<description><![CDATA[<p>Erik, Kevin, and I discuss... well a lot of things. </p>



<p>Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). </p>



<p>Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. </p>



<p>We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insight into emergent phenomena, and the role of agency with respect to what counts as intelligence.</p>









<ul class="wp-block-list"><li><a href="https://www.kjmitchell.com/">Kevin's website</a>.</li><li><a href="https://www.erikphoel.com/">Erik's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/WiringTheBrain?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">@WiringtheBrain</a> (Kevin); <a href="https://twitter.com/erikphoel">@erikphoel</a> (Erik)</li><li>Books:<ul><li><a href="https://www.amazon.com/gp/product/B07CSHZRGN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B07CSHZRGN&amp;linkId=31d60129dc19af78bb1b5851e69f1db8">INNATE – How the Wiring of Our Brains Shapes Who We Are</a></li><li><a href="https://www.amazon.com/gp/product/1419750224/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1419750224&amp;linkId=d9e2348c2893510421c9f2afc94442e7">The Revelations</a></li></ul></li><li>Papers<ul><li>Erik<ul><li><a href="https://arxiv.org/abs/2004.03541">Falsification and consciousness</a>.</li><li><a href="https://www.hindawi.com/journals/complexity/2020/8932526/">The emergence of informative higher scales in complex networks</a>.</li><li><a href="https://arxiv.org/abs/2104.13368">Emergence as the conversion of information: A unifying theory</a>.</li></ul></li></ul></li></ul>



<p>Timestamps</p>



<p>0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence
</p>]]></description>
	<itunes:subtitle><![CDATA[Erik, Kevin, and I discuss... well a lot of things. 



Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). 



Kevin's book Innate - How the Wiring ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Erik, Kevin, and I discuss... well a lot of things. </p>



<p>Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). </p>



<p>Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. </p>



<p>We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insight into emergent phenomena, and the role of agency with respect to what counts as intelligence.</p>









<ul class="wp-block-list"><li><a href="https://www.kjmitchell.com/">Kevin's website</a>.</li><li><a href="https://www.erikphoel.com/">Erik's website</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/WiringTheBrain?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">@WiringtheBrain</a> (Kevin); <a href="https://twitter.com/erikphoel">@erikphoel</a> (Erik)</li><li>Books:<ul><li><a href="https://www.amazon.com/gp/product/B07CSHZRGN/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B07CSHZRGN&amp;linkId=31d60129dc19af78bb1b5851e69f1db8">INNATE – How the Wiring of Our Brains Shapes Who We Are</a></li><li><a href="https://www.amazon.com/gp/product/1419750224/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1419750224&amp;linkId=d9e2348c2893510421c9f2afc94442e7">The Revelations</a></li></ul></li><li>Papers<ul><li>Erik<ul><li><a href="https://arxiv.org/abs/2004.03541">Falsification and consciousness</a>.</li><li><a href="https://www.hindawi.com/journals/complexity/2020/8932526/">The emergence of informative higher scales in complex networks</a>.</li><li><a href="https://arxiv.org/abs/2104.13368">Emergence as the conversion of information: A unifying theory</a>.</li></ul></li></ul></li></ul>



<p>Timestamps</p>



<p>0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence
</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1260/111.mp3" length="94450233" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Erik, Kevin, and I discuss... well a lot of things. 



Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). 



Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. 



We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insight into emergent phenomena, and the role of agency with respect to what counts as intelligence.









Kevin's website. Erik's website. Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik). Books: INNATE – How the Wiring of Our Brains Shapes Who We Are; The Revelations. Papers (Erik): Falsification and consciousness; The emergence of informative higher scales in complex networks; Emergence as the conversion of information: A unifying theory.



Timestamps



0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/07/111-art-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/07/111-art-01.jpg</url>
		<title>BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:38:04</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Erik, Kevin, and I discuss... well a lot of things. 



Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot). 



Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities. 



We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insight into emergent phenomena, and the role of agency with respect to what counts as intelligence.









Kevin's website. Erik's website. Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik). Books: INNATE – How the Wiring of Our Brains Shapes Who We Are; The Revelations. Papers (Erik): Falsification and consciousness; The emergence of informative higher scales in complex networks; Emergence as the conversion of information: A unifying theory]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/07/111-art-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 03: Stochastic Processes Panel</title>
	<link>https://braininspired.co/podcast/nma-3/</link>
	<pubDate>Thu, 22 Jul 2021 15:47:23 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1256</guid>
	<description><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://nivlab.princeton.edu/" target="_blank" rel="noreferrer noopener">Yael Niv</a>.<ul><li><a href="https://twitter.com/yael_niv">@yael_niv</a></li></ul></li><li><a href="http://koerding.com/">Konrad Kording</a><ul><li><a href="https://twitter.com/KordingLab">@KordingLab</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/27/">BI 027 Ioana Marinescu &amp; Konrad Kording: Causality in Quasi-Experiments</a>.</li><li><a href="https://braininspired.co/podcast/14/">BI 014 Konrad Kording: Regulators, Mount Up!</a></li></ul></li></ul></li><li><a href="https://gershmanlab.com/index.html">Sam Gershman</a>.<ul><li><a rel="noreferrer noopener" href="https://twitter.com/gershbrain" target="_blank">@gershbrain</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/95/">BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?</a></li><li><a href="https://braininspired.co/podcast/28/">BI 028 Sam Gershman: Free Energy Principle &amp; Human Machines</a>.</li></ul></li></ul></li><li><a href="https://www.ndcn.ox.ac.uk/team/timothy-behrens">Tim Behrens</a>.<ul><li><a href="https://twitter.com/behrenstimb">@behrenstim</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/35/">BI 035 Tim Behrens: Abstracting &amp; Generalizing Knowledge, &amp; Human Replay</a>.</li><li><a href="https://braininspired.co/podcast/24/">BI 024 Tim Behrens: Cognitive Maps</a>.</li></ul></li></ul></li></ul>



<p>This is the third in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</p>



<p>The other panels: </p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li><li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></description>
	<itunes:subtitle><![CDATA[Panelists:



Yael Niv.@yael_nivKonrad Kording@KordingLab.Previous BI episodes:BI 027 Ioana Marinescu &amp; Konrad Kording: Causality in Quasi-Experiments.BI 014 Konrad Kording: Regulators, Mount Up!Sam Gershman.@gershbrain.Previous BI episodes:BI 095 Ch]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://nivlab.princeton.edu/" target="_blank" rel="noreferrer noopener">Yael Niv</a>.<ul><li><a href="https://twitter.com/yael_niv">@yael_niv</a></li></ul></li><li><a href="http://koerding.com/">Konrad Kording</a><ul><li><a href="https://twitter.com/KordingLab">@KordingLab</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/27/">BI 027 Ioana Marinescu &amp; Konrad Kording: Causality in Quasi-Experiments</a>.</li><li><a href="https://braininspired.co/podcast/14/">BI 014 Konrad Kording: Regulators, Mount Up!</a></li></ul></li></ul></li><li><a href="https://gershmanlab.com/index.html">Sam Gershman</a>.<ul><li><a rel="noreferrer noopener" href="https://twitter.com/gershbrain" target="_blank">@gershbrain</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/95/">BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?</a></li><li><a href="https://braininspired.co/podcast/28/">BI 028 Sam Gershman: Free Energy Principle &amp; Human Machines</a>.</li></ul></li></ul></li><li><a href="https://www.ndcn.ox.ac.uk/team/timothy-behrens">Tim Behrens</a>.<ul><li><a href="https://twitter.com/behrenstimb">@behrenstim</a>.</li><li>Previous BI episodes:<ul><li><a href="https://braininspired.co/podcast/35/">BI 035 Tim Behrens: Abstracting &amp; Generalizing Knowledge, &amp; Human Replay</a>.</li><li><a href="https://braininspired.co/podcast/24/">BI 024 Tim Behrens: Cognitive Maps</a>.</li></ul></li></ul></li></ul>



<p>This is the third in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</p>



<p>The other panels: </p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li><li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1256/nma-3.mp3" length="58663415" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Panelists:



Yael Niv.@yael_nivKonrad Kording@KordingLab.Previous BI episodes:BI 027 Ioana Marinescu &amp; Konrad Kording: Causality in Quasi-Experiments.BI 014 Konrad Kording: Regulators, Mount Up!Sam Gershman.@gershbrain.Previous BI episodes:BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?BI 028 Sam Gershman: Free Energy Principle &amp; Human Machines.Tim Behrens.@behrenstim.Previous BI episodes:BI 035 Tim Behrens: Abstracting &amp; Generalizing Knowledge, &amp; Human Replay.BI 024 Tim Behrens: Cognitive Maps.



This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.



The other panels: 



First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Fifth panel, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-3-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/07/art-nma-3-01.jpg</url>
		<title>BI NMA 03: Stochastic Processes Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:00:48</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Panelists:



Yael Niv.@yael_nivKonrad Kording@KordingLab.Previous BI episodes:BI 027 Ioana Marinescu &amp; Konrad Kording: Causality in Quasi-Experiments.BI 014 Konrad Kording: Regulators, Mount Up!Sam Gershman.@gershbrain.Previous BI episodes:BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?BI 028 Sam Gershman: Free Energy Principle &amp; Human Machines.Tim Behrens.@behrenstim.Previous BI episodes:BI 035 Tim Behrens: Abstracting &amp; Generalizing Knowledge, &amp; Human Replay.BI 024 Tim Behrens: Cognitive Maps.



This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.



The other panels: 



First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.Second panel, about lin]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-3-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 02: Dynamical Systems Panel</title>
	<link>https://braininspired.co/podcast/nma-2/</link>
	<pubDate>Thu, 15 Jul 2021 14:36:00 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1254</guid>
	<description><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://fairhalllab.com/">Adrienne Fairhall</a>.<ul><li><a href="https://twitter.com/alfairhall">@alfairhall</a>.</li></ul></li><li><a href="http://bingbrunton.com">Bing Brunton</a>.<ul><li><a href="https://twitter.com/bingbrunton">@bingbrunton</a>.</li></ul></li><li><a href="https://www.rajanlab.com/">Kanaka Rajan</a>.<ul><li><a href="https://twitter.com/rajankdr?lang=en">@rajankdr</a>.</li><li><a href="https://braininspired.co/podcast/54/">BI 054 Kanaka Rajan: How Do We Switch Behaviors?</a></li></ul></li></ul>



<p>This is the second in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. </p>



<p>Other panels:</p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li><li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></description>
	<itunes:subtitle><![CDATA[Panelists:



Adrienne Fairhall.@alfairhall.Bing Brunton.@bingbrunton.Kanaka Rajan.@rajankdr.BI 054 Kanaka Rajan: How Do We Switch Behaviors?



This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online comp]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://fairhalllab.com/">Adrienne Fairhall</a>.<ul><li><a href="https://twitter.com/alfairhall">@alfairhall</a>.</li></ul></li><li><a href="http://bingbrunton.com">Bing Brunton</a>.<ul><li><a href="https://twitter.com/bingbrunton">@bingbrunton</a>.</li></ul></li><li><a href="https://www.rajanlab.com/">Kanaka Rajan</a>.<ul><li><a href="https://twitter.com/rajankdr?lang=en">@rajankdr</a>.</li><li><a href="https://braininspired.co/podcast/54/">BI 054 Kanaka Rajan: How Do We Switch Behaviors?</a></li></ul></li></ul>



<p>This is the second in a series of panel discussions in collaboration with <a href="https://academy.neuromatch.io/home">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. </p>



<p>Other panels:</p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-1/">First panel</a>, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</li><li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1254/nma-2.mp3" length="72755739" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Panelists:



Adrienne Fairhall.@alfairhall.Bing Brunton.@bingbrunton.Kanaka Rajan.@rajankdr.BI 054 Kanaka Rajan: How Do We Switch Behaviors?



This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. 



Other panels:



First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Fifth panel, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-2-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/07/art-nma-2-01.jpg</url>
		<title>BI NMA 02: Dynamical Systems Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:15:28</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Panelists:



Adrienne Fairhall.@alfairhall.Bing Brunton.@bingbrunton.Kanaka Rajan.@rajankdr.BI 054 Kanaka Rajan: How Do We Switch Behaviors?



This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks. 



Other panels:



First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Fifth panel, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsup]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-2-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI NMA 01: Machine Learning Panel</title>
	<link>https://braininspired.co/podcast/nma-1/</link>
	<pubDate>Mon, 12 Jul 2021 11:48:13 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1252</guid>
	<description><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://www.lim.bio/">Athena Akrami</a>: <a href="https://twitter.com/AthenaAkrami">@AthenaAkrami</a>.</li><li><a href="https://www.seas.harvard.edu/person/demba-ba">Demba Ba</a>.</li><li><a href="http://compneurosci.com/">Gunnar Blohm</a>: <a href="https://twitter.com/GunnarBlohm">@GunnarBlohm</a>.</li><li><a href="https://www.psy.pku.edu.cn/english/people/faculty/professor/kunlinwei/index.htm">Kunlin Wei</a>.</li></ul>



<p>This is the first in a series of panel discussions in collaboration with <a rel="noreferrer noopener" href="https://academy.neuromatch.io/home" target="_blank">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</p>



<p>Other panels:</p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li><li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></description>
	<itunes:subtitle><![CDATA[Panelists:



Athena Akrami: @AthenaAkrami.Demba Ba.Gunnar Blohm: @GunnarBlohm.Kunlin Wei.



This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episod]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Panelists:</p>



<ul class="wp-block-list"><li><a href="https://www.lim.bio/">Athena Akrami</a>: <a href="https://twitter.com/AthenaAkrami">@AthenaAkrami</a>.</li><li><a href="https://www.seas.harvard.edu/person/demba-ba">Demba Ba</a>.</li><li><a href="http://compneurosci.com/">Gunnar Blohm</a>: <a href="https://twitter.com/GunnarBlohm">@GunnarBlohm</a>.</li><li><a href="https://www.psy.pku.edu.cn/english/people/faculty/professor/kunlinwei/index.htm">Kunlin Wei</a>.</li></ul>



<p>This is the first in a series of panel discussions in collaboration with <a rel="noreferrer noopener" href="https://academy.neuromatch.io/home" target="_blank">Neuromatch Academy</a>, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.</p>



<p>Other panels:</p>



<ul class="wp-block-list"><li><a href="https://braininspired.co/podcast/nma-2/">Second panel</a>, about linear systems, real neurons, and dynamic networks.</li><li><a href="https://braininspired.co/podcast/nma-3/">Third panel</a>, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.</li><li><a href="https://braininspired.co/podcast/nma-4/">Fourth panel</a>, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.</li><li><a href="https://braininspired.co/podcast/nma-5/">Fifth panel</a>, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).</li><li><a href="https://braininspired.co/podcast/nma-6/">Sixth panel</a>, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.</li></ul>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1252/nma-1.mp3" length="84017229" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Panelists:



Athena Akrami: @AthenaAkrami.Demba Ba.Gunnar Blohm: @GunnarBlohm.Kunlin Wei.



This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.



Other panels:



Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Fifth panel, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement learning, continual learning/causality.]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-1-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/07/art-nma-1-01.jpg</url>
		<title>BI NMA 01: Machine Learning Panel</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Panelists:



Athena Akrami: @AthenaAkrami
Demba Ba
Gunnar Blohm: @GunnarBlohm
Kunlin Wei



This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.



Other panels:



Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, &amp; regularization.
Fifth panel, about “doing more with fewer parameters”: ConvNets, RNNs, attention &amp; transformers, generative models (VAEs &amp; GANs).
Sixth panel, about advanced topics in deep learning: unsupervised &amp; self-supervised learning, reinforcement]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/07/art-nma-1-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation</title>
	<link>https://braininspired.co/podcast/110/</link>
	<pubDate>Tue, 06 Jul 2021 20:38:41 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1246</guid>
	<description><![CDATA[<p>Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.</p>





<ul class="wp-block-list"><li><a href="https://www.catherinestinson.ca/">Catherine's website</a>.</li><li><a href="https://thompsonj.github.io/discussion-excerpt">Jessica's blog</a>.</li><li>Twitter: Jess: <a href="https://twitter.com/tsonj">@tsonj</a>.</li><li>Related papers<ul><li><a href="https://www.catherinestinson.ca/Files/Papers/Artificial_Neurons_preprint.pdf">From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence</a> - Catherine</li><li><a href="https://psyarxiv.com/5g3pn">Forms of explanation and understanding for neuroscience and artificial intelligence</a> - Jess</li></ul></li><li>Jess is a postdoc in Chris Summerfield's lab, and <a href="https://braininspired.co/podcast/95/">Chris and Sam Gershman were on a recent episode</a>.</li><li><a href="https://www.amazon.com/gp/product/0197510264/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0197510264&amp;linkId=39d73128189707346c55b6f16d794aad">Understanding Scientific Understanding</a> by Henk de Regt.</li></ul>



<p>Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early philosopher/scientists</p>]]></description>
	<itunes:subtitle><![CDATA[Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.</p>





<ul class="wp-block-list"><li><a href="https://www.catherinestinson.ca/">Catherine's website</a>.</li><li><a href="https://thompsonj.github.io/discussion-excerpt">Jessica's blog</a>.</li><li>Twitter: Jess: <a href="https://twitter.com/tsonj">@tsonj</a>.</li><li>Related papers<ul><li><a href="https://www.catherinestinson.ca/Files/Papers/Artificial_Neurons_preprint.pdf">From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence</a> - Catherine</li><li><a href="https://psyarxiv.com/5g3pn">Forms of explanation and understanding for neuroscience and artificial intelligence</a> - Jess</li></ul></li><li>Jess is a postdoc in Chris Summerfield's lab, and <a href="https://braininspired.co/podcast/95/">Chris and Sam Gershman were on a recent episode</a>.</li><li><a href="https://www.amazon.com/gp/product/0197510264/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0197510264&amp;linkId=39d73128189707346c55b6f16d794aad">Understanding Scientific Understanding</a> by Henk de Regt.</li></ul>



<p>Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early philosopher/scientists</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1246/110.mp3" length="81930159" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.





Catherine's website.
Jessica's blog.
Twitter: Jess: @tsonj.
Related papers:
From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence - Catherine
Forms of explanation and understanding for neuroscience and artificial intelligence - Jess
Jess is a postdoc in Chris Summerfield's lab, and Chris and Sam Gershman were on a recent episode.
Understanding Scientific Understanding by Henk de Regt.



Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early philosopher/scientists]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/07/art-stinson-thompson-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/07/art-stinson-thompson-01.jpg</url>
		<title>BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:02</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomenon performed by those objects. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.





Catherine's ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/07/art-stinson-thompson-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 109 Mark Bickhard: Interactivism</title>
	<link>https://braininspired.co/podcast/109/</link>
	<pubDate>Sat, 26 Jun 2021 16:57:31 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1243</guid>
	<description><![CDATA[<p>Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, that can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain far from equilibrium thermodynamics for the organism for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.</p>



<p>
For related discussions on the foundations (and issues of) representations, check out <a href="https://braininspired.co/podcast/60/">episode 60 with Michael Rescorla</a>, <a href="https://braininspired.co/podcast/61/">episode 61 with Jörn Diedrichsen and Niko Kriegeskorte</a>, and especially <a href="https://braininspired.co/podcast/79/">episode 79 with Romain Brette</a>.</p>



<ul class="wp-block-list"><li><a href="https://www.lehigh.edu/~mhb0/mhb0.html">Mark's website</a>.</li><li>Related papers<ul><li><a href="http://www.lehigh.edu/~mhb0/InteractivismManifesto.pdf">Interactivism: A manifesto</a>.</li><li>Plenty of <a href="https://www.lehigh.edu/~mhb0/pubspage.html">other papers available</a> via his website.</li></ul></li><li>Also mentioned:<ul><li><a href="https://www.amazon.com/gp/product/B01A0BGRJ6/ref=as_li_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;camp=1789&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B01A0BGRJ6&amp;linkId=c9e5de9ee873e3d3c12adca0d484b0e6">The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes</a> (2006), Haluk Ögmen, Bruno G. Breitmeyer.</li><li><a href="https://www.urmc.rochester.edu/labs/nedergaard.aspx">Maiken Nedergaard</a>'s <a href="https://neuroscience.stanford.edu/videos/nightlife-brain">work on sleep</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?</p>]]></description>
	<itunes:subtitle><![CDATA[Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge th]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, that can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain far from equilibrium thermodynamics for the organism for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.</p>



<p>
For related discussions on the foundations (and issues of) representations, check out <a href="https://braininspired.co/podcast/60/">episode 60 with Michael Rescorla</a>, <a href="https://braininspired.co/podcast/61/">episode 61 with Jörn Diedrichsen and Niko Kriegeskorte</a>, and especially <a href="https://braininspired.co/podcast/79/">episode 79 with Romain Brette</a>.</p>



<ul class="wp-block-list"><li><a href="https://www.lehigh.edu/~mhb0/mhb0.html">Mark's website</a>.</li><li>Related papers<ul><li><a href="http://www.lehigh.edu/~mhb0/InteractivismManifesto.pdf">Interactivism: A manifesto</a>.</li><li>Plenty of <a href="https://www.lehigh.edu/~mhb0/pubspage.html">other papers available</a> via his website.</li></ul></li><li>Also mentioned:<ul><li><a href="https://www.amazon.com/gp/product/B01A0BGRJ6/ref=as_li_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;camp=1789&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=B01A0BGRJ6&amp;linkId=c9e5de9ee873e3d3c12adca0d484b0e6">The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes</a> (2006), Haluk Ögmen, Bruno G. Breitmeyer.</li><li><a href="https://www.urmc.rochester.edu/labs/nedergaard.aspx">Maiken Nedergaard</a>'s <a href="https://neuroscience.stanford.edu/videos/nightlife-brain">work on sleep</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1243/109.mp3" length="119077909" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, that can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain far from equilibrium thermodynamics for the organism for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.




For related discussions on the foundations (and issues of) representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette.



Mark's website.
Related papers:
Interactivism: A manifesto.
Plenty of other papers available via his website.
Also mentioned:
The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen, Bruno G. Breitmeyer.
Maiken Nedergaard's work on sleep.



Timestamps
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/06/art-bickhard-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/06/art-bickhard-01.jpg</url>
		<title>BI 109 Mark Bickhard: Interactivism</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>02:03:43</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds is related to the external world - a challenge that has plagued the mind-body problem since the beginning. Basically, representations are anticipated interactions with the world, that can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they function to maintain far from equilibrium thermodynamics for the organism for self-maintenance. Over the years, Mark has filled out Interactivism, starting with a process metaphysics foundation and building from there to account for representations, how our brains might implement representations, and why AI is hindered by our modern "encoding" version of representation. We also compare interactivism to other similar frameworks, lik]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/06/art-bickhard-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 108 Grace Lindsay: Models of the Mind</title>
	<link>https://braininspired.co/podcast/108/</link>
	<pubDate>Wed, 16 Jun 2021 19:56:49 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1240</guid>
	<description><![CDATA[<ul class="wp-block-list"><li><a href="https://gracewlindsay.com/">Grace's website</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/neurograce">@neurograce</a>.</li><li><a href="https://www.amazon.com/gp/product/1472966422/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1472966422&amp;linkId=cf554d0eccb5e8485b7e414c571f1ec7">Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain</a>.</li><li>We talked about Grace's work using convolutional neural networks to study vision and attention <a href="https://braininspired.co/podcast/11/">way back on episode 11</a>.</li></ul>





<p>Grace and I discuss her new book <a href="https://www.amazon.com/gp/product/1472966422/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1472966422&amp;linkId=cf554d0eccb5e8485b7e414c571f1ec7">Models of the Mind</a>, about the blossoming and conceptual foundations of the computational approach to study minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.</p>



<p>Timestamps
0:00 - Intro
4:19 - Cognition beyond vision
12:38 - Models of the Mind - book overview
14:00 - The good and bad of using math
21:33 - I quiz Grace on her own book
25:03 - Birth of AI and computational approach
38:00 - Rediscovering old math for new neuroscience
41:00 - Topology as good math to know now
45:29 - Physics vs. neuroscience math
49:32 - Neural code and information theory
55:03 - Rate code vs. timing code
59:18 - Graph theory - can you deduce function from structure?
1:06:56 - Multiple realizability
1:13:01 - Grand Unified theories of the brain</p>]]></description>
	<itunes:subtitle><![CDATA[Grace's website. Twitter: @neurograce. Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain. We talked about Grace's work using convolutional neural networks to study vision and attention way back on epis]]></itunes:subtitle>
	<content:encoded><![CDATA[<ul class="wp-block-list"><li><a href="https://gracewlindsay.com/">Grace's website</a></li><li>Twitter:&nbsp;<a href="https://twitter.com/neurograce">@neurograce</a>.</li><li><a href="https://www.amazon.com/gp/product/1472966422/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1472966422&amp;linkId=cf554d0eccb5e8485b7e414c571f1ec7">Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain</a>.</li><li>We talked about Grace's work using convolutional neural networks to study vision and attention <a href="https://braininspired.co/podcast/11/">way back on episode 11</a>.</li></ul>





<p>Grace and I discuss her new book <a href="https://www.amazon.com/gp/product/1472966422/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1472966422&amp;linkId=cf554d0eccb5e8485b7e414c571f1ec7">Models of the Mind</a>, about the blossoming and conceptual foundations of the computational approach to study minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.</p>



<p>Timestamps
0:00 - Intro
4:19 - Cognition beyond vision
12:38 - Models of the Mind - book overview
14:00 - The good and bad of using math
21:33 - I quiz Grace on her own book
25:03 - Birth of AI and computational approach
38:00 - Rediscovering old math for new neuroscience
41:00 - Topology as good math to know now
45:29 - Physics vs. neuroscience math
49:32 - Neural code and information theory
55:03 - Rate code vs. timing code
59:18 - Graph theory - can you deduce function from structure?
1:06:56 - Multiple realizability
1:13:01 - Grand Unified theories of the brain</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1240/108.mp3" length="83062339" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Grace's website.
Twitter: @neurograce.
Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain.
We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11.





Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to study minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.



Timestamps
0:00 - Intro
4:19 - Cognition beyond vision
12:38 - Models of the Mind - book overview
14:00 - The good and bad of using math
21:33 - I quiz Grace on her own book
25:03 - Birth of AI and computational approach
38:00 - Rediscovering old math for new neuroscience
41:00 - Topology as good math to know now
45:29 - Physics vs. neuroscience math
49:32 - Neural code and information theory
55:03 - Rate code vs. timing code
59:18 - Graph theory - can you deduce function from structure?
1:06:56 - Multiple realizability
1:13:01 - Grand Unified theories of the brain]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/06/art-lindsay-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/06/art-lindsay-01.jpg</url>
		<title>BI 108 Grace Lindsay: Models of the Mind</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:26:12</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Grace's website.
Twitter: @neurograce.
Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain.
We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11.





Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to study minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research endeavors. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explo]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/06/art-lindsay-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 107 Steve Fleming: Know Thyself</title>
	<link>https://braininspired.co/podcast/107/</link>
	<pubDate>Sun, 06 Jun 2021 18:38:31 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1237</guid>
	<description><![CDATA[<p>Steve and I discuss many topics from his new book <a href="https://www.amazon.com/gp/product/1541672844/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1541672844&amp;linkId=087d7f9a45c4b8ac8166ec2d946236e3">Know Thyself: The Science of Self-Awareness</a>. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.</p>





<ul class="wp-block-list"><li>Steve's lab: <a href="https://metacoglab.org/">The MetaLab</a>.</li><li>Twitter: <a href="https://twitter.com/smfleming">@smfleming</a>.</li><li><a href="https://braininspired.co/podcast/99/">Steve and Hakwan Lau on episode 99 about consciousness</a>.&nbsp;</li><li>Papers:<ul><li>Metacognitive training: <a href="https://static1.squarespace.com/static/5616b377e4b0670f148ff742/t/5d0f80760ff3ce00011762f1/1561297019002/CarpenterJEPG2019.pdf">Domain-General Enhancements of Metacognitive Ability Through Adaptive Training</a></li></ul></li><li>The book:<ul><li><a href="https://www.amazon.com/gp/product/1541672844/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1541672844&amp;linkId=087d7f9a45c4b8ac8166ec2d946236e3">Know Thyself: The Science of Self-Awareness</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
3:25 - Steve's Career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition</p>]]></description>
	<itunes:subtitle><![CDATA[Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computationa]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Steve and I discuss many topics from his new book <a href="https://www.amazon.com/gp/product/1541672844/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1541672844&amp;linkId=087d7f9a45c4b8ac8166ec2d946236e3">Know Thyself: The Science of Self-Awareness</a>. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.</p>





<ul class="wp-block-list"><li>Steve's lab: <a href="https://metacoglab.org/">The MetaLab</a>.</li><li>Twitter: <a href="https://twitter.com/smfleming">@smfleming</a>.</li><li><a href="https://braininspired.co/podcast/99/">Steve and Hakwan Lau on episode 99 about consciousness</a>.&nbsp;</li><li>Papers:<ul><li>Metacognitive training: <a href="https://static1.squarespace.com/static/5616b377e4b0670f148ff742/t/5d0f80760ff3ce00011762f1/1561297019002/CarpenterJEPG2019.pdf">Domain-General Enhancements of Metacognitive Ability Through Adaptive Training</a></li></ul></li><li>The book:<ul><li><a href="https://www.amazon.com/gp/product/1541672844/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1541672844&amp;linkId=087d7f9a45c4b8ac8166ec2d946236e3">Know Thyself: The Science of Self-Awareness</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
3:25 - Steve's Career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1237/107.mp3" length="86123824" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.





Steve's lab: The MetaLab.
Twitter: @smfleming.
Steve and Hakwan Lau on episode 99 about consciousness.
Papers: Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training.
The book: Know Thyself: The Science of Self-Awareness.



Timestamps
0:00 - Intro
3:25 - Steve's Career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/06/art-fleming-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/06/art-fleming-01.jpg</url>
		<title>BI 107 Steve Fleming: Know Thyself</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:29:24</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.





Steve's lab: The MetaLab.
Twitter: @smfleming.
Steve and Hakwan Lau on episode 99 about consciousness.
Papers: Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training.
The book: Know Thyself: The Scienc]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/06/art-fleming-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity</title>
	<link>https://braininspired.co/podcast/106/</link>
	<pubDate>Thu, 27 May 2021 18:32:20 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1232</guid>
	<description><![CDATA[<p>Jackie and Bob discuss their research and thinking about curiosity. </p>



<p>Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. </p>



<p>Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavior and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes.</p>



<p>We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes, and Jackie is slightly worried that will be the time to worry about AI).</p>





<ul class="wp-block-list"><li>Jackie's lab: <a href="https://www.gottlieblab.com/">Jacqueline Gottlieb Laboratory at Columbia University</a>.</li><li>Bob's lab: <a href="http://u.arizona.edu/~bob/index.html">Neuroscience of Reinforcement Learning and Decision Making</a>.</li><li>Twitter: Bob: <a href="https://twitter.com/NRDlab">@NRDLab</a> (Jackie's not on Twitter).</li><li>Related papers:<ul><li><a href="https://9fabbb78-5b5b-4a1b-aeb0-b5443b6eac2f.filesusr.com/ugd/9cf124_51965780868c4847bdcc2648c03d7671.pdf">Curiosity, information demand and attentional priority</a>.</li><li><a href="https://psyarxiv.com/e9azw">Balancing exploration and exploitation with information and randomization</a>.</li><li><a href="https://psyarxiv.com/uj85c">Deep exploration as a unifying account of explore-exploit behavior</a>.</li></ul></li><li>Bob mentions an influential talk by Benjamin Van Roy:<ul><li><a href="http://videolectures.net/rldm2015_van_roy_function_randomization/">Generalization and Exploration via Value Function Randomization</a>.</li></ul></li><li>Bob mentions his paper with Anne Collins:<ul><li><a href="https://elifesciences.org/articles/49547">Ten simple rules for the computational modeling of behavioral data</a>.</li></ul></li></ul>



<p>Timestamps:</p>



<p>0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. Exploration vs. Intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?</p>]]></description>
	<itunes:subtitle><![CDATA[Jackie and Bob discuss their research and thinking about curiosity. 



Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoin]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Jackie and Bob discuss their research and thinking about curiosity. </p>



<p>Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. </p>



<p>Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavior and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes.</p>



<p>We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes, and Jackie is slightly worried that will be the time to worry about AI).</p>





<ul class="wp-block-list"><li>Jackie's lab: <a href="https://www.gottlieblab.com/">Jacqueline Gottlieb Laboratory at Columbia University</a>.</li><li>Bob's lab: <a href="http://u.arizona.edu/~bob/index.html">Neuroscience of Reinforcement Learning and Decision Making</a>.</li><li>Twitter: Bob: <a href="https://twitter.com/NRDlab">@NRDLab</a> (Jackie's not on Twitter).</li><li>Related papers:<ul><li><a href="https://9fabbb78-5b5b-4a1b-aeb0-b5443b6eac2f.filesusr.com/ugd/9cf124_51965780868c4847bdcc2648c03d7671.pdf">Curiosity, information demand and attentional priority</a>.</li><li><a href="https://psyarxiv.com/e9azw">Balancing exploration and exploitation with information and randomization</a>.</li><li><a href="https://psyarxiv.com/uj85c">Deep exploration as a unifying account of explore-exploit behavior</a>.</li></ul></li><li>Bob mentions an influential talk by Benjamin Van Roy:<ul><li><a href="http://videolectures.net/rldm2015_van_roy_function_randomization/">Generalization and Exploration via Value Function Randomization</a>.</li></ul></li><li>Bob mentions his paper with Anne Collins:<ul><li><a href="https://elifesciences.org/articles/49547">Ten simple rules for the computational modeling of behavioral data</a>.</li></ul></li></ul>



<p>Timestamps:</p>



<p>0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. Exploration vs. Intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1232/106.mp3" length="88508517" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Jackie and Bob discuss their research and thinking about curiosity. 



Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. 



Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavior and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes.



We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes, and Jackie is slightly worried that will be the time to worry about AI).





Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University.
Bob's lab: Neuroscience of Reinforcement Learning and Decision Making.
Twitter: Bob: @NRDLab (Jackie's not on Twitter).
Related papers:
Curiosity, information demand and attentional priority.
Balancing exploration and exploitation with information and randomization.
Deep exploration as a unifying account of explore-exploit behavior.
Bob mentions an influential talk by Benjamin Van Roy: Generalization and Exploration via Value Function Randomization.
Bob mentions his paper with Anne Collins: Ten simple rules for the computational modeling of behavioral data.



Timestamps:



0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. Exploration vs. Intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/05/art-gottlieb-and-wilson-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/05/art-gottlieb-and-wilson-01.jpg</url>
		<title>BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:31:53</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Jackie and Bob discuss their research and thinking about curiosity. 



Jackie's background is studying decision making and attention, recording neurons in nonhuman primates during eye movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she recently has focused on behavioral strategies to exercise curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation. 



Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavior and neuroimaging data in humans to test the models. He's broadly interested in how and whether we can understand brains and cognition using mathematical models. Recently he's been working on a model for curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the s]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/05/art-gottlieb-and-wilson-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 105 Sanjeev Arora: Off the Convex Path</title>
	<link>https://braininspired.co/podcast/105/</link>
	<pubDate>Mon, 17 May 2021 13:58:43 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1229</guid>
	<description><![CDATA[<p>Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and therefore are a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking of how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely large layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets.</p>]]></description>



<ul class="wp-block-list"><li><a href="https://www.cs.princeton.edu/~arora/">Sanjeev's website</a>.</li><li>His <a href="https://unsupervised.cs.princeton.edu/">Research group website</a>.</li><li>His blog: <a href="http://offconvex.github.io/">Off The Convex Path.</a></li><li>Papers we discuss<ul><li><a href="https://arxiv.org/abs/1904.11955">On Exact Computation with an Infinitely Wide Neural Net.</a></li><li><a href="https://arxiv.org/pdf/1910.07454.pdf">An Exponential Learning Rate Schedule for Deep Learning</a></li></ul></li><li>Related<ul><li>The episode with <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a> covers related deep learning theory in episode 52.</li><li><a href="https://braininspired.co/podcast/97/">Omri Barak</a> discusses the importance of learning trajectories to understand RNNs in episode 97.</li><li>Sanjeev mentions <a href="https://ai.columbia.edu/faculty/christos-papadimitriou">Christos Papadimitriou</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds</p>]]></description>
	<itunes:subtitle><![CDATA[Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and therefore are a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking of how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely large layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets.</p>



<ul class="wp-block-list"><li><a href="https://www.cs.princeton.edu/~arora/">Sanjeev's website</a>.</li><li>His <a href="https://unsupervised.cs.princeton.edu/">Research group website</a>.</li><li>His blog: <a href="http://offconvex.github.io/">Off The Convex Path.</a></li><li>Papers we discuss<ul><li><a href="https://arxiv.org/abs/1904.11955">On Exact Computation with an Infinitely Wide Neural Net.</a></li><li><a href="https://arxiv.org/pdf/1910.07454.pdf">An Exponential Learning Rate Schedule for Deep Learning</a></li></ul></li><li>Related<ul><li>The episode with <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a> covers related deep learning theory in episode 52.</li><li><a href="https://braininspired.co/podcast/97/">Omri Barak</a> discusses the importance of learning trajectories to understand RNNs in episode 97.</li><li>Sanjeev mentions <a href="https://ai.columbia.edu/faculty/christos-papadimitriou">Christos Papadimitriou</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1229/105.mp3" length="59548189" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and therefore are a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking of how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely large layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't share the current neuroscience optimism comparing brains to deep nets.



Sanjeev's website.
His Research group website.
His blog: Off The Convex Path.
Papers we discuss:
On Exact Computation with an Infinitely Wide Neural Net.
An Exponential Learning Rate Schedule for Deep Learning.
Related:
The episode with Andrew Saxe covers related deep learning theory in episode 52.
Omri Barak discusses the importance of learning trajectories to understand RNNs in episode 97.
Sanjeev mentions Christos Papadimitriou.



Timestamps
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/05/art-arora-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/05/art-arora-01.jpg</url>
		<title>BI 105 Sanjeev Arora: Off the Convex Path</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:01:43</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and therefore are a "black box" for math to open. We discuss how Sanjeev thinks optimization, the common framework for thinking of how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that occur as a result of different learning algorithms. We discuss two examples of his research to illustrate this: creating deep nets with infinitely large layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and how he doesn't sh]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/05/art-arora-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight</title>
	<link>https://braininspired.co/podcast/104/</link>
	<pubDate>Fri, 07 May 2021 16:34:07 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1222</guid>
	<description><![CDATA[<p>What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, in its "wild west" days still. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, <a href="https://www.amazon.com/gp/product/1079002251/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1079002251&amp;linkId=3c8899852e62756cee3d2d0bc9285a32">The Eureka Factor</a>), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.</p>]]></description>





<ul class="wp-block-list"><li><a href="https://sites.google.com/site/johnkounios/">John Kounios</a>.</li><li><a href="https://www.secretchordlaboratories.com/">Secret Chord Laboratories</a> (David's company).</li><li>Twitter: <a href="https://twitter.com/JohnKounios">@JohnKounios</a>; <a href="https://twitter.com/NeuroBassDave" target="_blank" rel="noreferrer noopener">@NeuroBassDave</a>.</li><li>John's book (with Mark Beeman) on insight and creativity.<ul><li><a href="https://www.amazon.com/gp/product/1079002251/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1079002251&amp;linkId=3c8899852e62756cee3d2d0bc9285a32">The Eureka Factor: Aha Moments, Creative Insight, and the Brain</a>.</li></ul></li><li>The papers we discuss or mention:<ul><li><a href="https://www.researchgate.net/publication/315908560_All_You_Need_to_Do_Is_Ask_The_Exhortation_to_Be_Creative_Improves_Creative_Performance_More_for_Nonexpert_Than_Expert_Jazz_Musicians">All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians</a></li><li><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2016.00579/full">Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts</a></li><li><a href="https://www.sciencedirect.com/science/article/pii/S1053811920301191">Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
16:20 - Where are we broadly in science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process</p>]]></description>
	<itunes:subtitle><![CDATA[What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, in its wild west days still. We talk about a few creativity st]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, in its "wild west" days still. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, <a href="https://www.amazon.com/gp/product/1079002251/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1079002251&amp;linkId=3c8899852e62756cee3d2d0bc9285a32">The Eureka Factor</a>), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.</p>





<ul class="wp-block-list"><li><a href="https://sites.google.com/site/johnkounios/">John Kounios</a>.</li><li><a href="https://www.secretchordlaboratories.com/">Secret Chord Laboratories</a> (David's company).</li><li>Twitter: <a href="https://twitter.com/JohnKounios">@JohnKounios</a>; <a href="https://twitter.com/NeuroBassDave" target="_blank" rel="noreferrer noopener">@NeuroBassDave</a>.</li><li>John's book (with Mark Beeman) on insight and creativity.<ul><li><a href="https://www.amazon.com/gp/product/1079002251/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1079002251&amp;linkId=3c8899852e62756cee3d2d0bc9285a32" target="_blank" rel="noreferrer noopener">The Eureka Factor: Aha Moments, Creative Insight, and the Brain</a>.</li></ul></li><li>The papers we discuss or mention:<ul><li><a href="https://www.researchgate.net/publication/315908560_All_You_Need_to_Do_Is_Ask_The_Exhortation_to_Be_Creative_Improves_Creative_Performance_More_for_Nonexpert_Than_Expert_Jazz_Musicians">All You Need to Do Is Ask? 
The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians</a></li><li><a href="https://www.frontiersin.org/articles/10.3389/fnhum.2016.00579/full">Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts</a></li><li><a href="https://www.sciencedirect.com/science/article/pii/S1053811920301191">Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study</a>.</li></ul></li></ul>



<p>Timestamps
0:00 - Intro
16:20 - Where are we broadly in science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1222/104.mp3" length="106423489" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, in its "wild west" days still. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.





John Kounios.Secret Chord Laboratories (David's company).Twitter: @JohnKounios; @NeuroBassDave.John's book (with Mark Beeman) on insight and creativity.The Eureka Factor: Aha Moments, Creative Insight, and the Brain.The papers we discuss or mention:All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz MusiciansAnodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders ExpertsDual-process contributions to creativity in jazz improvisations: An SPM-EEG study.



Timestamps
0:00 - Intro
16:20 - Where are we broadly in science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/05/art-rosen-kounios-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/05/art-rosen-kounios-01.jpg</url>
		<title>BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:50:32</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, in its "wild west" days still. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.





John Kounios.Secret Chord Laboratories (David's company).Twitter: @JohnKounios; @NeuroBassDave.John's book (with Mark Beeman) on insight and creativity.The Eur]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/05/art-rosen-kounios-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading</title>
	<link>https://braininspired.co/podcast/103/</link>
	<pubDate>Mon, 26 Apr 2021 14:19:54 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1215</guid>
	<description><![CDATA[<p>Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind.</p>



<p>
Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.</p>



<ul class="wp-block-list"><li>Randal A Koene<ul><li>Twitter: <a href="https://twitter.com/randalkoene" target="_blank" rel="noreferrer noopener">@randalkoene</a></li><li><a href="https://carboncopies.org/">Carboncopies Foundation</a>.</li><li><a href="https://www.randalkoene.com/home" target="_blank" rel="noreferrer noopener">Randal's website</a>.</li></ul></li><li>Ken Hayworth<ul><li>Twitter: <a href="https://twitter.com/KennethHayworth">@KennethHayworth</a></li><li><a href="https://www.brainpreservation.org/">Brain Preservation Foundation</a>.<ul><li><a href="https://www.youtube.com/user/brainpreservation">YouTube videos</a>.</li></ul></li></ul></li></ul>



<p>Timestamps
0:00 - Intro
6:14 - What Ken wants
11:22 - What Randal wants
22:29 - Brain preservation
27:18 - Aldehyde stabilized cryopreservation
31:51 - Scan and copy vs. gradual replacement
38:25 - Building a roadmap
49:45 - Limits of current experimental paradigms
53:51 - Our evolved brains
1:06:58 - Counterarguments
1:10:31 - Animal models for whole brain emulation
1:15:01 - Understanding vs. emulating brains
1:22:37 - Current challenges</p>]]></description>
	<itunes:subtitle><![CDATA[Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind.</p>



<p>
Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.</p>



<ul class="wp-block-list"><li>Randal A Koene<ul><li>Twitter: <a href="https://twitter.com/randalkoene" target="_blank" rel="noreferrer noopener">@randalkoene</a></li><li><a href="https://carboncopies.org/">Carboncopies Foundation</a>.</li><li><a href="https://www.randalkoene.com/home" target="_blank" rel="noreferrer noopener">Randal's website</a>.</li></ul></li><li>Ken Hayworth<ul><li>Twitter: <a href="https://twitter.com/KennethHayworth">@KennethHayworth</a></li><li><a href="https://www.brainpreservation.org/">Brain Preservation Foundation</a>.<ul><li><a href="https://www.youtube.com/user/brainpreservation">YouTube videos</a>.</li></ul></li></ul></li></ul>



<p>Timestamps
0:00 - Intro
6:14 - What Ken wants
11:22 - What Randal wants
22:29 - Brain preservation
27:18 - Aldehyde stabilized cryopreservation
31:51 - Scan and copy vs. gradual replacement
38:25 - Building a roadmap
49:45 - Limits of current experimental paradigms
53:51 - Our evolved brains
1:06:58 - Counterarguments
1:10:31 - Animal models for whole brain emulation
1:15:01 - Understanding vs. emulating brains
1:22:37 - Current challenges</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1215/103.mp3" length="84235073" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind.




Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.



Randal A KoeneTwitter: @randalkoeneCarboncopies Foundation.Randal's website.Ken HayworthTwitter: @KennethHayworthBrain Preservation Foundation.Youtube videos.



Timestamps
0:00 - Intro
6:14 - What Ken wants
11:22 - What Randal wants
22:29 - Brain preservation
27:18 - Aldehyde stabilized cryopreservation
31:51 - Scan and copy vs. gradual replacement
38:25 - Building a roadmap
49:45 - Limits of current experimental paradigms
53:51 - Our evolved brains
1:06:58 - Counterarguments
1:10:31 - Animal models for whole brain emulation
1:15:01 - Understanding vs. emulating brains
1:22:37 - Current challenges]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/04/art-koene-hayworth-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/04/art-koene-hayworth-01.jpg</url>
		<title>BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:27:26</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house your mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material and yet retaining a functioning mind.




Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, wit]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/04/art-koene-hayworth-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 102 Mark Humphries: What Is It Like To Be A Spike?</title>
	<link>https://braininspired.co/podcast/102/</link>
	<pubDate>Fri, 16 Apr 2021 14:58:34 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1209</guid>
	<description><![CDATA[<p>Mark and I discuss his book, <a rel="noreferrer noopener" href="https://www.amazon.com/gp/product/0691195889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691195889&amp;linkId=862612dc7cf84f4586028b1eb2b9ef11" target="_blank">The Spike: An Epic Journey Through the Brain in 2.1 Seconds</a>. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes,  how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - <a href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/">he was on episode 4</a> in the early days, talking more in depth about some of the work we discuss in this episode!</p>





<ul class="wp-block-list"><li><a href="https://www.humphries-lab.org/">The Humphries Lab</a>.</li><li>Twitter: <a href="https://twitter.com/markdhumphries">@markdhumphries</a></li><li>Book: <a rel="noreferrer noopener" href="https://www.amazon.com/gp/product/0691195889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691195889&amp;linkId=862612dc7cf84f4586028b1eb2b9ef11" target="_blank">The Spike: An Epic Journey Through the Brain in 2.1 Seconds</a>.</li><li>Related papers<ul><li><a href="https://elifesciences.org/articles/27342">A spiral attractor network drives rhythmic locomotion</a>.</li></ul></li></ul>



<p>Timestamps:</p>



<p>0:00 - Intro
3:25 - Writing a book
15:37 - Mark's main interest
19:41 - Future explanation of brain/mind
27:00 - Stochasticity and excitation/inhibition balance
36:56 - Dendritic computation for network dynamics
39:10 - Do details matter for AI?
44:06 - Spike failure
51:12 - Dark neurons
1:07:57 - Intrinsic spontaneous activity
1:16:16 - Best scientific moment
1:23:58 - Failure
1:28:45 - Advice</p>]]></description>
	<itunes:subtitle><![CDATA[Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person look]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Mark and I discuss his book, <a rel="noreferrer noopener" href="https://www.amazon.com/gp/product/0691195889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691195889&amp;linkId=862612dc7cf84f4586028b1eb2b9ef11" target="_blank">The Spike: An Epic Journey Through the Brain in 2.1 Seconds</a>. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes,  how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - <a href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/">he was on episode 4</a> in the early days, talking more in depth about some of the work we discuss in this episode!</p>





<ul class="wp-block-list"><li><a href="https://www.humphries-lab.org/">The Humphries Lab</a>.</li><li>Twitter: <a href="https://twitter.com/markdhumphries">@markdhumphries</a></li><li>Book: <a rel="noreferrer noopener" href="https://www.amazon.com/gp/product/0691195889/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=0691195889&amp;linkId=862612dc7cf84f4586028b1eb2b9ef11" target="_blank">The Spike: An Epic Journey Through the Brain in 2.1 Seconds</a>.</li><li>Related papers<ul><li><a href="https://elifesciences.org/articles/27342">A spiral attractor network drives rhythmic locomotion</a>.</li></ul></li></ul>



<p>Timestamps:</p>



<p>0:00 - Intro
3:25 - Writing a book
15:37 - Mark's main interest
19:41 - Future explanation of brain/mind
27:00 - Stochasticity and excitation/inhibition balance
36:56 - Dendritic computation for network dynamics
39:10 - Do details matter for AI?
44:06 - Spike failure
51:12 - Dark neurons
1:07:57 - Intrinsic spontaneous activity
1:16:16 - Best scientific moment
1:23:58 - Failure
1:28:45 - Advice</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1209/102.mp3" length="88947053" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes,  how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking more in depth about some of the work we discuss in this episode!





The Humphries Lab.Twitter: @markdhumphriesBook: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.Related papersA spiral attractor network drives rhythmic locomotion.



Timestamps:



0:00 - Intro
3:25 - Writing a book
15:37 - Mark's main interest
19:41 - Future explanation of brain/mind
27:00 - Stochasticity and excitation/inhibition balance
36:56 - Dendritic computation for network dynamics
39:10 - Do details matter for AI?
44:06 - Spike failure
51:12 - Dark neurons
1:07:57 - Intrinsic spontaneous activity
1:16:16 - Best scientific moment
1:23:58 - Failure
1:28:45 - Advice]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/04/art-humphries-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/04/art-humphries-01.jpg</url>
		<title>BI 102 Mark Humphries: What Is It Like To Be A Spike?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:32:20</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes,  how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking more in depth about some of the work we discuss in this episode!





The Humphries Lab.Twitter: @mark]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/04/art-humphries-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 101 Steve Potter: Motivating Brains In and Out of Dishes</title>
	<link>https://braininspired.co/podcast/101/</link>
	<pubDate>Tue, 06 Apr 2021 21:51:36 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1206</guid>
	<description><![CDATA[<p>Steve and I discuss his book, <a href="https://www.amazon.com/gp/product/1838172807/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1838172807&amp;linkId=7f2ad551e783d3a00686770ee61da142">How to Motivate Your Students to Love Learning</a>, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">in his previous episode</a>). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.</p>





<p>
In the first half of the episode, we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning.</p>



<ul class="wp-block-list"><li><a href="http://potterlab.gatech.edu/">Potter Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/stevempotter">@stevempotter</a>.</li><li>The Book: <a href="https://www.amazon.com/gp/product/1838172807/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1838172807&amp;linkId=7f2ad551e783d3a00686770ee61da142">How to Motivate Your Students to Love Learning.</a></li><li><a href="https://drive.google.com/file/d/1AN1XASJkfNDTbVvIeP8hbO-AnWPEwtKN/view?usp=sharing">The glial cell activity movie</a>.</li></ul>



<p>0:00 - Intro
6:38 - Brain organoids
18:48 - Glial cell plasticity
24:50 - Whole brain emulation
35:28 - Industry vs. academia
45:32 - Intro to book: How To Motivate Your Students To Love Learning
48:29 - Steve's childhood influences
57:21 - Developing one's own intrinsic motivation
1:02:30 - Real-world assignments
1:08:00 - Keys to motivation
1:11:50 - Peer pressure
1:21:16 - Autonomy
1:25:38 - Wikipedia real-world assignment
1:33:12 - Relation to running a lab</p>]]></description>
	<itunes:subtitle><![CDATA[Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses whi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Steve and I discuss his book, <a href="https://www.amazon.com/gp/product/1838172807/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1838172807&amp;linkId=7f2ad551e783d3a00686770ee61da142">How to Motivate Your Students to Love Learning</a>, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">in his previous episode</a>). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.</p>





<p>
In the first half of the episode, we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning.</p>



<ul class="wp-block-list"><li><a href="http://potterlab.gatech.edu/">Potter Lab</a>.</li><li>Twitter:&nbsp;<a href="https://twitter.com/stevempotter">@stevempotter</a>.</li><li>The Book: <a href="https://www.amazon.com/gp/product/1838172807/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=pmiddlebroo09-20&amp;creative=9325&amp;linkCode=as2&amp;creativeASIN=1838172807&amp;linkId=7f2ad551e783d3a00686770ee61da142">How to Motivate Your Students to Love Learning.</a></li><li><a href="https://drive.google.com/file/d/1AN1XASJkfNDTbVvIeP8hbO-AnWPEwtKN/view?usp=sharing">The glial cell activity movie</a>.</li></ul>



<p>0:00 - Intro
6:38 - Brain organoids
18:48 - Glial cell plasticity
24:50 - Whole brain emulation
35:28 - Industry vs. academia
45:32 - Intro to book: How To Motivate Your Students To Love Learning
48:29 - Steve's childhood influences
57:21 - Developing one's own intrinsic motivation
1:02:30 - Real-world assignments
1:08:00 - Keys to motivation
1:11:50 - Peer pressure
1:21:16 - Autonomy
1:25:38 - Wikipedia real-world assignment
1:33:12 - Relation to running a lab</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1206/101.mp3" length="101459593" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.






In the first half of the episode, we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and lifelong students who want to ensure they're optimizing their own learning.



Potter Lab. Twitter: @stevempotter. The Book: How to Motivate Your Students to Love Learning. The glial cell activity movie.



0:00 - Intro
6:38 - Brain organoids
18:48 - Glial cell plasticity
24:50 - Whole brain emulation
35:28 - Industry vs. academia
45:32 - Intro to book: How To Motivate Your Students To Love Learning
48:29 - Steve's childhood influences
57:21 - Developing one's own intrinsic motivation
1:02:30 - Real-world assignments
1:08:00 - Keys to motivation
1:11:50 - Peer pressure
1:21:16 - Autonomy
1:25:38 - Wikipedia real-world assignment
1:33:12 - Relation to running a lab]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/04/art-potter-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/04/art-potter-01.jpg</url>
		<title>BI 101 Steve Potter: Motivating Brains In and Out of Dishes</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:45:22</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.






The first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, m]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/04/art-potter-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?</title>
	<link>https://braininspired.co/podcast/100-6/</link>
	<pubDate>Sun, 28 Mar 2021 21:04:17 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1187</guid>
	<description><![CDATA[<p>We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent <a href="https://www.patreon.com/braininspired">Patreon supporters</a> (thanks guys!!!!). The final question I sent to previous guests:</p>



<p><strong>Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?</strong></p>



<p>Timestamps:</p>



<p>0:00 - Intro
5:04 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
7:04 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a>
7:46 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
9:03 - <a href="https://braininspired.co/podcast/33/">Federico Turkheimer</a>
11:57 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
13:31 - <a href="https://braininspired.co/podcast/77/">David Krakauer</a>
17:22 - <a href="https://braininspired.co/podcast/18/">Dean Buonomano</a>
20:28 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
22:00 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
23:15 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
24:41 - <a href="https://braininspired.co/podcast/75/">Jim DiCarlo</a>
25:26 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
28:02 - <a href="https://braininspired.co/podcast/72/">Mazviita Chirimuuta</a>
29:27 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
31:23 - <a href="https://braininspired.co/podcast/71/">Patrick Mayo</a>
32:30 - <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>
37:07 - <a href="https://braininspired.co/podcast/81/">Pieter Roelfsema</a>
37:26 - <a href="https://braininspired.co/podcast/84/">David Poeppel</a>
40:22 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
44:52 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
47:03 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
</p>]]></description>
	<itunes:subtitle><![CDATA[We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent P]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent <a href="https://www.patreon.com/braininspired">Patreon supporters</a> (thanks guys!!!!). The final question I sent to previous guests:</p>



<p><strong>Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?</strong></p>



<p>Timestamps:</p>



<p>0:00 - Intro
5:04 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
7:04 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a>
7:46 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
9:03 - <a href="https://braininspired.co/podcast/33/">Federico Turkheimer</a>
11:57 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
13:31 - <a href="https://braininspired.co/podcast/77/">David Krakauer</a>
17:22 - <a href="https://braininspired.co/podcast/18/">Dean Buonomano</a>
20:28 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
22:00 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
23:15 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
24:41 - <a href="https://braininspired.co/podcast/75/">Jim DiCarlo</a>
25:26 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
28:02 - <a href="https://braininspired.co/podcast/72/">Mazviita Chirimuuta</a>
29:27 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
31:23 - <a href="https://braininspired.co/podcast/71/">Patrick Mayo</a>
32:30 - <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>
37:07 - <a href="https://braininspired.co/podcast/81/">Pieter Roelfsema</a>
37:26 - <a href="https://braininspired.co/podcast/84/">David Poeppel</a>
40:22 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
44:52 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
47:03 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
</p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1187/100-6.mp3" length="48346621" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests:



Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?



Timestamps:



0:00 - Intro
5:04 - Andrew Saxe
7:04 - Thomas Naselaris
7:46 - John Krakauer
9:03 - Federico Turkheimer
11:57 - Steve Potter
13:31 - David Krakauer
17:22 - Dean Buonomano
20:28 - Konrad Kording
22:00 - Uri Hasson
23:15 - Rodrigo Quian Quiroga
24:41 - Jim DiCarlo
25:26 - Marcel van Gerven
28:02 - Mazviita Chirimuuta
29:27 - Brad Love
31:23 - Patrick Mayo
32:30 - György Buzsáki
37:07 - Pieter Roelfsema
37:26 - David Poeppel
40:22 - Paul Cisek
44:52 - Talia Konkle
47:03 - Steve Grossberg]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-6-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/03/art-100-6-01.jpg</url>
		<title>BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>00:50:03</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests:



Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?



Timestamps:



0:00 - Intro
5:04 - Andrew Saxe
7:04 - Thomas Naselaris
7:46 - John Krakauer
9:03 - Federico Turkheimer
11:57 - Steve Potter
13:31 - David Krakauer
17:22 - Dean Buonomano
20:28 - Konrad Kording
22:00 - Uri Hasson
23:15 - Rodrigo Quian Quiroga
24:41 - Jim DiCarlo
25:26 - Marcel van Gerven
28:02 - Mazviita Chirimuuta
29:27 - Brad Love
31:23 - Patrick Mayo
32:30 - György Buzsáki
37:07 - Pieter Roelfsema
37:26 - David Poeppel
40:22 - Paul Cisek
44:52 - Talia Konkle
47:03 - Steve Grossberg]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-6-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 100.4 Special: What Ideas Are Holding Us Back?</title>
	<link>https://braininspired.co/podcast/100-4/</link>
	<pubDate>Sun, 21 Mar 2021 16:29:11 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1185</guid>
	<description><![CDATA[<p>In the 4th installment of our 100th episode celebration, previous guests responded to the question:</p>



<p><strong>What ideas, assumptions, or terms do you think is holding back neuroscience/AI, and why?</strong></p>



<p>As usual, the responses are varied and wonderful!</p>







<p>Timestamps:</p>



<p>0:00 - Intro
6:41 - <a href="https://braininspired.co/podcast/81/">Pieter Roelfsema</a>
7:52 - <a href="https://braininspired.co/podcast/11/">Grace Lindsay</a>
10:23 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
11:38 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
14:05 - <a href="https://braininspired.co/podcast/83/">Jane Wang</a>
16:50 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a>
18:14 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
19:18 - <a href="https://braininspired.co/podcast/26/">Kendrick Kay</a>
22:17 - <a href="https://braininspired.co/podcast/9/">Blake Richards</a>
27:52 - <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>
30:13 - <a href="https://braininspired.co/podcast/75/">Jim DiCarlo</a>
31:17 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
33:27 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
35:37 - <a href="https://braininspired.co/podcast/58/">Wolfgang Maass</a>
38:48 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
40:41 - <a href="https://braininspired.co/podcast/71/">Patrick Mayo</a>
41:51 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
43:22 - <a href="https://braininspired.co/podcast/84/">David Poeppel</a>
44:22 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
46:47 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
47:36 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
48:47 - <a href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/">Mark Humphries</a>
52:35 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
55:13 - <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>
59:50 - <a href="https://braininspired.co/podcast/62/">Stefan Leijnen</a>
1:02:18 - <a href="https://braininspired.co/podcast/37/">Nathaniel Daw</a></p>]]></description>
	<itunes:subtitle><![CDATA[In the 4th installment of our 100th episode celebration, previous guests responded to the question:



What ideas, assumptions, or terms do you think is holding back neuroscience/AI, and why?



As usual, the responses are varied and wonderful!







Ti]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>In the 4th installment of our 100th episode celebration, previous guests responded to the question:</p>



<p><strong>What ideas, assumptions, or terms do you think is holding back neuroscience/AI, and why?</strong></p>



<p>As usual, the responses are varied and wonderful!</p>







<p>Timestamps:</p>



<p>0:00 - Intro
6:41 - <a href="https://braininspired.co/podcast/81/">Pieter Roelfsema</a>
7:52 - <a href="https://braininspired.co/podcast/11/">Grace Lindsay</a>
10:23 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
11:38 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
14:05 - <a href="https://braininspired.co/podcast/83/">Jane Wang</a>
16:50 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a>
18:14 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
19:18 - <a href="https://braininspired.co/podcast/26/">Kendrick Kay</a>
22:17 - <a href="https://braininspired.co/podcast/9/">Blake Richards</a>
27:52 - <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>
30:13 - <a href="https://braininspired.co/podcast/75/">Jim DiCarlo</a>
31:17 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
33:27 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
35:37 - <a href="https://braininspired.co/podcast/58/">Wolfgang Maass</a>
38:48 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
40:41 - <a href="https://braininspired.co/podcast/71/">Patrick Mayo</a>
41:51 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
43:22 - <a href="https://braininspired.co/podcast/84/">David Poeppel</a>
44:22 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
46:47 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
47:36 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
48:47 - <a href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/">Mark Humphries</a>
52:35 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
55:13 - <a href="https://braininspired.co/podcast/84/">György Buzsáki</a>
59:50 - <a href="https://braininspired.co/podcast/62/">Stefan Leijnen</a>
1:02:18 - <a href="https://braininspired.co/podcast/37/">Nathaniel Daw</a></p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1185/100-4.mp3" length="62161652" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[In the 4th installment of our 100th episode celebration, previous guests responded to the question:



What ideas, assumptions, or terms do you think is holding back neuroscience/AI, and why?



As usual, the responses are varied and wonderful!







Timestamps:



0:00 - Intro
6:41 - Pieter Roelfsema
7:52 - Grace Lindsay
10:23 - Marcel van Gerven
11:38 - Andrew Saxe
14:05 - Jane Wang
16:50 - Thomas Naselaris
18:14 - Steve Potter
19:18 - Kendrick Kay
22:17 - Blake Richards
27:52 - Jay McClelland
30:13 - Jim DiCarlo
31:17 - Talia Konkle
33:27 - Uri Hasson
35:37 - Wolfgang Maass
38:48 - Paul Cisek
40:41 - Patrick Mayo
41:51 - Konrad Kording
43:22 - David Poeppel
44:22 - Brad Love
46:47 - Rodrigo Quian Quiroga
47:36 - Steve Grossberg
48:47 - Mark Humphries
52:35 - John Krakauer
55:13 - György Buzsáki
59:50 - Stefan Leijnen
1:02:18 - Nathaniel Daw]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-4-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/03/art-100-4-01.jpg</url>
		<title>BI 100.4 Special: What Ideas Are Holding Us Back?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:04:26</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[In the 4th installment of our 100th episode celebration, previous guests responded to the question:



What ideas, assumptions, or terms do you think is holding back neuroscience/AI, and why?



As usual, the responses are varied and wonderful!







Timestamps:



0:00 - Intro
6:41 - Pieter Roelfsema
7:52 - Grace Lindsay
10:23 - Marcel van Gerven
11:38 - Andrew Saxe
14:05 - Jane Wang
16:50 - Thomas Naselaris
18:14 - Steve Potter
19:18 - Kendrick Kay
22:17 - Blake Richards
27:52 - Jay McClelland
30:13 - Jim DiCarlo
31:17 - Talia Konkle
33:27 - Uri Hasson
35:37 - Wolfgang Maass
38:48 - Paul Cisek
40:41 - Patrick Mayo
41:51 - Konrad Kording
43:22 - David Poeppel
44:22 - Brad Love
46:47 - Rodrigo Quian Quiroga
47:36 - Steve Grossberg
48:47 - Mark Humphries
52:35 - John Krakauer
55:13 - György Buzsáki
59:50 - Stefan Leijnen
1:02:18 - Nathaniel Daw]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-4-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 100.3 Special: Can We Scale Up to AGI with Current Tech?</title>
	<link>https://braininspired.co/podcast/100-3/</link>
	<pubDate>Wed, 17 Mar 2021 14:03:02 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1183</guid>
	<description><![CDATA[<p>Part 3 in our 100th episode celebration. Previous guests answered the question: </p>



<p><strong>Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):</strong></p>



<p><strong>Do you think the current trend of scaling compute can lead to human level AGI?</strong> <strong>If not, what's missing?</strong></p>



<p>It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.</p>







<p>Timestamps:</p>



<p>0:00 - Intro
3:56 - <a href="https://braininspired.co/podcast/58/">Wolfgang Maass</a>
5:34 - <a href="https://braininspired.co/podcast/29/">Paul Humphreys</a>
9:16 - <a href="https://braininspired.co/podcast/90/">Chris Eliasmith</a>
12:52 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
16:25 - <a href="https://braininspired.co/podcast/72/">Mazviita Chirimuuta</a>
18:11 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
19:21 - <a href="https://braininspired.co/podcast/9/">Blake Richards</a>
22:33 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
26:24 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
29:12 - <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>
34:20 - <a href="https://braininspired.co/podcast/73/">Megan Peters</a>
37:00 - <a href="https://braininspired.co/podcast/18/">Dean Buonomano</a>
39:48 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
40:36 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
42:40 - <a href="https://braininspired.co/podcast/37/">Nathaniel Daw</a>
44:02 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
45:28 - <a href="https://braininspired.co/podcast/54/">Kanaka Rajan</a>
48:25 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
51:05 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
53:03 - <a href="https://braininspired.co/podcast/11/">Grace Lindsay</a>
55:13 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
57:30 - <a href="https://braininspired.co/podcast/17/">Jeff Hawkins</a>
1:02:12 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
1:04:08 - <a href="https://braininspired.co/podcast/51/">Jess Hamrick</a>
1:06:20 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a></p>]]></description>
	<itunes:subtitle><![CDATA[Part 3 in our 100th episode celebration. Previous guests answered the question: 



Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):



Do you thi]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Part 3 in our 100th episode celebration. Previous guests answered the question: </p>



<p><strong>Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):</strong></p>



<p><strong>Do you think the current trend of scaling compute can lead to human level AGI?</strong> <strong>If not, what's missing?</strong></p>



<p>It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.</p>







<p>Timestamps:</p>



<p>0:00 - Intro
3:56 - <a href="https://braininspired.co/podcast/58/">Wolfgang Maass</a>
5:34 - <a href="https://braininspired.co/podcast/29/">Paul Humphreys</a>
9:16 - <a href="https://braininspired.co/podcast/90/">Chris Eliasmith</a>
12:52 - <a href="https://braininspired.co/podcast/52/">Andrew Saxe</a>
16:25 - <a href="https://braininspired.co/podcast/72/">Mazviita Chirimuuta</a>
18:11 - <a href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/">Steve Potter</a>
19:21 - <a href="https://braininspired.co/podcast/9/">Blake Richards</a>
22:33 - <a href="https://braininspired.co/podcast/66/">Paul Cisek</a>
26:24 - <a href="https://braininspired.co/podcast/70/">Brad Love</a>
29:12 - <a href="https://braininspired.co/podcast/30/">Jay McClelland</a>
34:20 - <a href="https://braininspired.co/podcast/73/">Megan Peters</a>
37:00 - <a href="https://braininspired.co/podcast/18/">Dean Buonomano</a>
39:48 - <a href="https://braininspired.co/podcast/44/">Talia Konkle</a>
40:36 - <a href="https://braininspired.co/podcast/82/">Steve Grossberg</a>
42:40 - <a href="https://braininspired.co/podcast/37/">Nathaniel Daw</a>
44:02 - <a href="https://braininspired.co/podcast/23/">Marcel van Gerven</a>
45:28 - <a href="https://braininspired.co/podcast/54/">Kanaka Rajan</a>
48:25 - <a href="https://braininspired.co/podcast/77/">John Krakauer</a>
51:05 - <a href="https://braininspired.co/podcast/68/">Rodrigo Quian Quiroga</a>
53:03 - <a href="https://braininspired.co/podcast/11/">Grace Lindsay</a>
55:13 - <a href="https://braininspired.co/podcast/27/">Konrad Kording</a>
57:30 - <a href="https://braininspired.co/podcast/17/">Jeff Hawkins</a>
1:02:12 - <a href="https://braininspired.co/podcast/63/">Uri Hasson</a>
1:04:08 - <a href="https://braininspired.co/podcast/51/">Jess Hamrick</a>
1:06:20 - <a href="https://braininspired.co/podcast/55/">Thomas Naselaris</a></p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1183/100-3.mp3" length="66265848" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Part 3 in our 100th episode celebration. Previous guests answered the question: 



Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):



Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing?



It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.







Timestamps:



0:00 - Intro
3:56 - Wolfgang Maass
5:34 - Paul Humphreys
9:16 - Chris Eliasmith
12:52 - Andrew Saxe
16:25 - Mazviita Chirimuuta
18:11 - Steve Potter
19:21 - Blake Richards
22:33 - Paul Cisek
26:24 - Brad Love
29:12 - Jay McClelland
34:20 - Megan Peters
37:00 - Dean Buonomano
39:48 - Talia Konkle
40:36 - Steve Grossberg
42:40 - Nathaniel Daw
44:02 - Marcel van Gerven
45:28 - Kanaka Rajan
48:25 - John Krakauer
51:05 - Rodrigo Quian Quiroga
53:03 - Grace Lindsay
55:13 - Konrad Kording
57:30 - Jeff Hawkins
1:02:12 - Uri Hasson
1:04:08 - Jess Hamrick
1:06:20 - Thomas Naselaris]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-3-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/03/art-100-3-01.jpg</url>
		<title>BI 100.3 Special: Can We Scale Up to AGI with Current Tech?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:08:43</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Part 3 in our 100th episode celebration. Previous guests answered the question: 



Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):



Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing?



It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.







Timestamps:



0:00 - Intro
3:56 - Wolfgang Maass
5:34 - Paul Humphreys
9:16 - Chris Eliasmith
12:52 - Andrew Saxe
16:25 - Mazviita Chirimuuta
18:11 - Steve Potter
19:21 - Blake Richards
22:33 - Paul Cisek
26:24 - Brad Love
29:12 - Jay McClelland
34:20 - Megan Peters
37:00 - Dean Buonomano
39:48 - Talia Konkle
40:36 - Steve Grossberg
42:40 - Nathaniel Daw
44:02 - Marcel van Gerven
45:28 - Kanaka Rajan
48:25 - John Krakauer
51:05 - Rodrigo Quian Quiroga
53:03 - Grace Lindsay
55:13 - Konrad Kor]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-3-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 100.2 Special: What Are the Biggest Challenges and Disagreements?</title>
	<link>https://braininspired.co/podcast/100-2/</link>
	<pubDate>Fri, 12 Mar 2021 15:24:11 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1181</guid>
	<description><![CDATA[<p>In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.</p>



<p>Timestamps:</p>



<p>0:00 - Intro
7:10 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/68/" target="_blank">Rodrigo Quian Quiroga</a>
8:33 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/72/" target="_blank">Mazviita Chirimuuta</a>
9:15 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/90/" target="_blank">Chris Eliasmith</a>
12:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/75/" target="_blank">Jim DiCarlo</a>
13:23 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/66/" target="_blank">Paul Cisek</a>
16:42 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/37/" target="_blank">Nathaniel Daw</a>
17:58 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/51/" target="_blank">Jessica Hamrick</a>
19:07 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/92/" target="_blank">Russ Poldrack</a>
20:47 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/81/" target="_blank">Pieter Roelfsema</a>
22:21 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/27/" target="_blank">Konrad Kording</a>
25:16 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/89/" target="_blank">Matt Smith</a>
27:55 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/32/" target="_blank">Rafal Bogacz</a>
29:17 -<a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank"> John Krakauer</a>
30:47 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">Marcel van Gerven</a>
31:49 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">György Buzsáki</a>
35:38 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/55/" target="_blank">Thomas Naselaris</a>
36:55 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/82/" target="_blank">Steve Grossberg</a>
48:32 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/84/" target="_blank">David Poeppel</a>
49:24 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/71/" target="_blank">Patrick Mayo</a>
50:31 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/62/" target="_blank">Stefan Leijnen</a>
54:24 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">David Krakauer</a>
58:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/58/" target="_blank">Wolfgang Maass</a>
59:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/63/" target="_blank">Uri Hasson</a>
59:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/" target="_blank">Steve Potter</a>
1:01:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/44/" target="_blank">Talia Konkle</a>
1:04:30 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/21/" target="_blank">Matt Botvinick</a>
1:06:36 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/70/" target="_blank">Brad Love</a>
1:09:46 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/53/" target="_blank">Jon Brennan</a>
1:19:31 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/11/" target="_blank">Grace Lindsay</a>
1:22:28 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/52/" target="_blank">Andrew Saxe</a></p>]]></description>
	<itunes:subtitle><![CDATA[In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.</p>



<p>Timestamps:</p>



<p>0:00 - Intro
7:10 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/68/" target="_blank">Rodrigo Quian Quiroga</a>
8:33 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/72/" target="_blank">Mazviita Chirimuuta</a>
9:15 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/90/" target="_blank">Chris Eliasmith</a>
12:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/75/" target="_blank">Jim DiCarlo</a>
13:23 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/66/" target="_blank">Paul Cisek</a>
16:42 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/37/" target="_blank">Nathaniel Daw</a>
17:58 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/51/" target="_blank">Jessica Hamrick</a>
19:07 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/92/" target="_blank">Russ Poldrack</a>
20:47 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/81/" target="_blank">Pieter Roelfsema</a>
22:21 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/27/" target="_blank">Konrad Kording</a>
25:16 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/89/" target="_blank">Matt Smith</a>
27:55 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/32/" target="_blank">Rafal Bogacz</a>
29:17 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">John Krakauer</a>
30:47 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">Marcel van Gerven</a>
31:49 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">György Buzsáki</a>
35:38 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/55/" target="_blank">Thomas Naselaris</a>
36:55 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/82/" target="_blank">Steve Grossberg</a>
48:32 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/84/" target="_blank">David Poeppel</a>
49:24 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/71/" target="_blank">Patrick Mayo</a>
50:31 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/62/" target="_blank">Stefan Leijnen</a>
54:24 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">David Krakauer</a>
58:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/58/" target="_blank">Wolfgang Maass</a>
59:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/63/" target="_blank">Uri Hasson</a>
59:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/" target="_blank">Steve Potter</a>
1:01:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/44/" target="_blank">Talia Konkle</a>
1:04:30 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/21/" target="_blank">Matt Botvinick</a>
1:06:36 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/70/" target="_blank">Brad Love</a>
1:09:46 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/53/" target="_blank">Jon Brennan</a>
1:19:31 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/11/" target="_blank">Grace Lindsay</a>
1:22:28 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/52/" target="_blank">Andrew Saxe</a></p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1181/100-2.mp3" length="81907837" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.



Timestamps:



0:00 - Intro
7:10 - Rodrigo Quian Quiroga
8:33 - Mazviita Chirimuuta
9:15 - Chris Eliasmith
12:50 - Jim DiCarlo
13:23 - Paul Cisek
16:42 - Nathaniel Daw
17:58 - Jessica Hamrick
19:07 - Russ Poldrack
20:47 - Pieter Roelfsema
22:21 - Konrad Kording
25:16 - Matt Smith
27:55 - Rafal Bogacz
29:17 - John Krakauer
30:47 - Marcel van Gerven
31:49 - György Buzsáki
35:38 - Thomas Naselaris
36:55 - Steve Grossberg
48:32 - David Poeppel
49:24 - Patrick Mayo
50:31 - Stefan Leijnen
54:24 - David Krakauer
58:13 - Wolfgang Maass
59:13 - Uri Hasson
59:50 - Steve Potter
1:01:50 - Talia Konkle
1:04:30 - Matt Botvinick
1:06:36 - Brad Love
1:09:46 - Jon Brennan
1:19:31 - Grace Lindsay
1:22:28 - Andrew Saxe]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-2-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/03/art-100-2-01.jpg</url>
		<title>BI 100.2 Special: What Are the Biggest Challenges and Disagreements?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>01:25:00</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.



Timestamps:



0:00 - Intro
7:10 - Rodrigo Quian Quiroga
8:33 - Mazviita Chirimuuta
9:15 - Chris Eliasmith
12:50 - Jim DiCarlo
13:23 - Paul Cisek
16:42 - Nathaniel Daw
17:58 - Jessica Hamrick
19:07 - Russ Poldrack
20:47 - Pieter Roelfsema
22:21 - Konrad Kording
25:16 - Matt Smith
27:55 - Rafal Bogacz
29:17 - John Krakauer
30:47 - Marcel van Gerven
31:49 - György Buzsáki
35:38 - Thomas Naselaris
36:55 - Steve Grossberg
48:32 - David Poeppel
49:24 - Patrick Mayo
50:31 - Stefan Leijnen
54:24 - David Krakauer
58:13 - Wolfgang Maass
59:13 - Uri Hasson
59:50 - Steve Potter
1:01:50 - Talia Konkle
1:04:30 - Matt Botvinick
1:06:36 - Brad Love
1:09:46 - ]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-2-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>

<item>
	<title>BI 100.1 Special: What Has Improved Your Career or Well-being?</title>
	<link>https://braininspired.co/podcast/100-1/</link>
	<pubDate>Tue, 09 Mar 2021 22:18:23 +0000</pubDate>
	<dc:creator><![CDATA[Paul Middlebrooks]]></dc:creator>
	<guid isPermaLink="false">https://braininspired.co/?post_type=podcast&#038;p=1179</guid>
	<description><![CDATA[<p>Brain Inspired turns 100 (episodes) today! To celebrate, my <a rel="noreferrer noopener" href="https://www.patreon.com/braininspired" target="_blank">patreon supporters</a> helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go...</p>



<p>Timestamps:</p>



<p>0:00 - Intro
6:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">David Krakauer</a>
8:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/84/" target="_blank">David Poeppel</a>
9:32 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/30/" target="_blank">Jay McClelland</a>
11:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/71/" target="_blank">Patrick Mayo</a>
11:45 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">Marcel van Gerven</a>
12:11 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/9/" target="_blank">Blake Richards</a>
12:25 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">John Krakauer</a>
14:22 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/57/" target="_blank">Nicole Rust</a>
15:26 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/73/" target="_blank">Megan Peters</a>
17:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/52/" target="_blank">Andrew Saxe</a>
18:11 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/33/" target="_blank">Federico Turkheimer</a>
20:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/68/" target="_blank">Rodrigo Quian Quiroga</a>
22:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/55/" target="_blank">Thomas Naselaris</a>
23:09 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/" target="_blank">Steve Potter</a>
24:37 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/70/" target="_blank">Brad Love</a>
27:18 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/82/" target="_blank">Steve Grossberg</a>
29:04 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/44/" target="_blank">Talia Konkle</a>
29:58 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/66/" target="_blank">Paul Cisek</a>
32:28 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">Kanaka Rajan</a>
34:33 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/11/" target="_blank">Grace Lindsay</a>
35:40 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/27/" target="_blank">Konrad Kording</a>
36:30 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/" target="_blank">Mark Humphries</a></p>]]></description>
	<itunes:subtitle><![CDATA[Brain Inspired turns 100 (episodes) today! To celebrate, my patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separ]]></itunes:subtitle>
	<content:encoded><![CDATA[<p>Brain Inspired turns 100 (episodes) today! To celebrate, my <a rel="noreferrer noopener" href="https://www.patreon.com/braininspired" target="_blank">patreon supporters</a> helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go...</p>



<p>Timestamps:</p>



<p>0:00 - Intro
6:13 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">David Krakauer</a>
8:50 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/84/" target="_blank">David Poeppel</a>
9:32 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/30/" target="_blank">Jay McClelland</a>
11:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/71/" target="_blank">Patrick Mayo</a>
11:45 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/23/" target="_blank">Marcel van Gerven</a>
12:11 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/9/" target="_blank">Blake Richards</a>
12:25 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/77/" target="_blank">John Krakauer</a>
14:22 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/57/" target="_blank">Nicole Rust</a>
15:26 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/73/" target="_blank">Megan Peters</a>
17:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/52/" target="_blank">Andrew Saxe</a>
18:11 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/33/" target="_blank">Federico Turkheimer</a>
20:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/68/" target="_blank">Rodrigo Quian Quiroga</a>
22:03 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/55/" target="_blank">Thomas Naselaris</a>
23:09 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-001-steven-potter-brains-in-dishes/" target="_blank">Steve Potter</a>
24:37 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/70/" target="_blank">Brad Love</a>
27:18 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/82/" target="_blank">Steve Grossberg</a>
29:04 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/44/" target="_blank">Talia Konkle</a>
29:58 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/66/" target="_blank">Paul Cisek</a>
32:28 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/54/" target="_blank">Kanaka Rajan</a>
34:33 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/11/" target="_blank">Grace Lindsay</a>
35:40 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/27/" target="_blank">Konrad Kording</a>
36:30 - <a rel="noreferrer noopener" href="https://braininspired.co/podcast/bi-004-mark-humphries-learning-to-remember/" target="_blank">Mark Humphries</a></p>]]></content:encoded>
	<enclosure url="https://braininspired.co/podcast-download/1179/100-1.mp3" length="41139882" type="audio/mpeg"></enclosure>
	<itunes:summary><![CDATA[Brain Inspired turns 100 (episodes) today! To celebrate, my patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go...



Timestamps:



0:00 - Intro
6:13 - David Krakauer
8:50 - David Poeppel
9:32 - Jay McClelland
11:03 - Patrick Mayo
11:45 - Marcel van Gerven
12:11 - Blake Richards
12:25 - John Krakauer
14:22 - Nicole Rust
15:26 - Megan Peters
17:03 - Andrew Saxe
18:11 - Federico Turkheimer
20:03 - Rodrigo Quian Quiroga
22:03 - Thomas Naselaris
23:09 - Steve Potter
24:37 - Brad Love
27:18 - Steve Grossberg
29:04 - Talia Konkle
29:58 - Paul Cisek
32:28 - Kanaka Rajan
34:33 - Grace Lindsay
35:40 - Konrad Kording
36:30 - Mark Humphries]]></itunes:summary>
	<itunes:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-1-01.jpg"></itunes:image>
	<image>
		<url>https://braininspired.co/wp-content/uploads/2021/03/art-100-1-01.jpg</url>
		<title>BI 100.1 Special: What Has Improved Your Career or Well-being?</title>
	</image>
	<itunes:explicit>false</itunes:explicit>
	<itunes:block>no</itunes:block>
	<itunes:duration>00:42:32</itunes:duration>
	<itunes:author><![CDATA[Paul Middlebrooks]]></itunes:author>	<googleplay:description><![CDATA[Brain Inspired turns 100 (episodes) today! To celebrate, my patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go...



Timestamps:



0:00 - Intro
6:13 - David Krakauer
8:50 - David Poeppel
9:32 - Jay McClelland
11:03 - Patrick Mayo
11:45 - Marcel van Gerven
12:11 - Blake Richards
12:25 - John Krakauer
14:22 - Nicole Rust
15:26 - Megan Peters
17:03 - Andrew Saxe
18:11 - Federico Turkheimer
20:03 - Rodrigo Quian Quiroga
22:03 - Thomas Naselaris
23:09 - Steve Potter
24:37 - Brad Love
27:18 - Steve Grossberg
29:04 - Talia Konkle
29:58 - Paul Ci]]></googleplay:description>
	<googleplay:image href="https://braininspired.co/wp-content/uploads/2021/03/art-100-1-01.jpg"></googleplay:image>
	<googleplay:explicit>No</googleplay:explicit>
	<googleplay:block>no</googleplay:block>
</item>
	</channel>
</rss>
