Brain Inspired
BI 162 Earl K. Miller: Thoughts are an Emergent Property

Support the show to get full episodes and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl’s career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.

Recently on BI we’ve discussed oscillations quite a bit. In episode 153, Carolyn Dicey Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl’s research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low frequency oscillations exert a top-down control on incoming sensory stimuli, and this is directly in agreement with Earl’s work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity can be on or off, and hence contribute or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics.

0:00 – Intro
6:22 – Evolution of Earl’s thinking
14:58 – Role of the prefrontal cortex
25:21 – Spatial computing
32:51 – Homunculus problem
35:34 – Self
37:40 – Dimensionality and thought
46:13 – Reductionism
47:38 – Working memory and capacity
1:01:45 – Capacity as a principle
1:05:44 – Silent synapses
1:10:16 – Subspaces in dynamics

Transcript

Earl    00:00:03    And we really had kind of a clockwork view of the brain, where, you know, the brain is composed of a bunch of parts, and if you figure out what each of the parts does, you’ll figure out how the brain works. But now the field has very much shifted. Now we’re no longer focused on individual parts. It’s all about emergent properties. To the extent that you have control over your own thoughts, you need something that’s a controlling signal, a top-down feedback controlling signal. And right now, this alpha/beta signal is a good candidate for it. Don’t think of alpha or beta as having one function in the brain. It doesn’t. What alpha/beta versus gamma are, are different energy states of the network.

Paul    00:00:48    Good day everyone. This is Brain Inspired. I’m Paul. My guest today is Earl Miller. Earl runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular, he’s interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed, um, throughout Earl’s career and how his own thoughts have changed along with that. Uh, one thing that we focus on is the increasing appreciation of brain oscillations for our cognition. Recently on Brain Inspired, we’ve discussed oscillations quite a bit. On episode 153, Carolyn Dicey Jennings discussed her philosophical ideas relating attention to the notion of the self. And she leans a lot on Earl’s research, uh, with oscillations to make that argument.

Paul    00:01:47    In episode 160, Ole Jensen discussed his work in humans, uh, showing that low frequency oscillations exert a top-down control on incoming sensory stimuli. Uh, and this is in direct agreement with Earl’s work, uh, over many years in non-human primates. So we continue that discussion relating low frequency oscillations to executive control. Uh, we also discuss a new concept, uh, Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity, uh, can be on or off, and, uh, hence contribute or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics. Show notes are at braininspired.co/podcast/162. Recently, a listener donated a, uh, generous amount to Brain Inspired, and when I thanked them, uh, I also noted that <laugh>, if half of my listeners were a 10th as generous, um, I’d be able to continue doing Brain Inspired indefinitely. Anyway, if you want to ensure that this podcast continues, consider supporting it, even for a tiny amount. It helps way more than you know. Learn more about that at braininspired.co. Thank you to everyone who has, uh, or continues to support the show. Uh, I deeply appreciate it and of course, I appreciate Earl spending this time with me. And here he is.

Paul    00:03:20    It’s President’s Day, so the university is closed. Of course, you’re still in lab, but, um, my, my main pressing question first of all is have you played your bass guitar yet? Today?  

Earl    00:03:31    Today? No, not today. Tonight I’ll play.

Paul    00:03:34    Do you play daily?

Earl    00:03:35    Pretty much, yeah.  

Paul    00:03:37    Do you have a, do you have a show tonight or is that, is that when you practice?  

Earl    00:03:40    A show tonight? No, no. Um, this is just, you know, my hobby. So tonight I’ll be playing, you know, alone. Uh, you know, I play with a local band here. We, uh, we used to gig regularly before Covid. Then Covid sort of knocked everything down, but we’re all busy professionals, fellow neuroscientists and a neurologist and stuff like that. So we’re all busy, so we don’t get a chance to play even every week, but we play every few weeks.

Paul    00:04:04    How long have you been practicing playing,  

Earl    00:04:07    Playing bass, guitar,  

Paul    00:04:09    Bass? Well, I, I don’t know what you started with. Have you been playing an instrument since you were a kid?

Earl    00:04:13    Yes. I actually, I, I had many years playing trumpet. I was a real band geek in junior high school and high school. So I took private lessons. I was in orchestra, I was in swing band, I was in jazz band. I was in a chamber ensemble. And then, um, I went to, uh, graduate school and sort of, uh, everything but science sort of fell by the wayside. And about 10 years ago, I decided I wanted to start playing again. And I still have my old trumpet, but I wanted to pick up a new instrument. I always loved the bass. And I have a bunch of friends who sing and play guitar, and they’re always looking for a bass player. So there you go.  

Paul    00:04:45    Oh, cuz everyone plays guitar. Yeah.  

Earl    00:04:47    <laugh>. Yeah. Shake a tree. 10 guitars will fall out.  

Paul    00:04:50    <laugh>. That’s right. That’s what I play. I I have this guitar hanging right next to me and I haven’t played it, uh, in a few days and I need to get back on it. So maybe I’ll just take some inspiration from you.  

Earl    00:04:59    There you go. Uh,  

Paul    00:05:00    I mean, before we start talking about science, I, I’m just kind of curious. I’ve had people tell me in the past that, uh, playing music, you know, affects their other cognitive, um, functions. Right? So do, do you feel like musicianship, um, has influenced the way that you move about your science career?

Earl    00:05:22    Well, I can’t say cause I don’t have a control group. Um, however, I I will say that, you know, when you’re a kid and you’re learning music, I mean, it teaches you discipline, it teaches you focus, teaches you concentration. If you wanna get ahead, you gotta practice regular, you gotta follow a regimen. You can’t just play random stuff. So it really teaches you how to, uh, how to focus and stay on task.  

Paul    00:05:41    Hmm. I had a friend who told me that most, uh, that playing, if you play music, um, it increases the likelihood that you’ll become a medical doctor. I know that you, you started in, uh, in like pre-med, right?  

Earl    00:05:54    I did, yeah. But I think it’s true. There’s a lot of scientists I notice who play music. In fact, you know, the Cognitive Neuroscience Society meeting is at the end of March, and I’m playing in the, um, scientist band Pavlov’s Dogs. We’re doing a gig at the Hyatt Regency on Saturday, March 25th, 9:30 PM, in the ballroom in San Francisco. If you’re there at the cognitive neuroscience meeting, it’s where all the cool kids will be.

Paul    00:06:18    Okay. I’ve never had someone plug music on the podcast. That’s a first. That’s, that’s great. <laugh> <laugh>. So, um, a recent review that, uh, you co-authored on working memory begins, “Over 30 years ago, working memory was solved.” Mm-hmm. <affirmative>. And so we’re gonna talk a lot about your work on working memory. Okay. But just in a very broad picture, how have your views on not only working memory, but just the brain and mind, how have they changed over many years?

Earl    00:06:50    Oh, dramatically. When I was a graduate student back in the 20th century, in the mid eighties, late eighties, let’s be fair, um, the focus was on the brain’s individual parts. State of the art was single-electrode recording. You record from one neuron at a time. We focused on individual neurons, we focused on individual brain areas. And we really had kind of a clockwork view of the brain where, you know, the brain is composed of a bunch of parts, and if we figure out what each of the parts does, we’ll figure out how the brain works. And things have changed dramatically since then. Not saying that’s wrong, there is some truth to that, and it was foundational, a necessary step. You have to figure out the parts before you figure out the whole. But now the field has very much shifted.

Earl    00:07:33    Now we’re no longer focused on individual parts. It’s all about emergent properties. You know, what the parts do, the properties that the brain parts have when they’re working together as a whole. Things you can’t tell by focusing on one neuron or one brain area at a time. And one of those things is, um, oscillatory dynamics, which is, you know, your brain is awash in these electric fields that are constantly fluctuating rhythmically, anywhere from one time a second to a hundred times a second or more. And when I was a graduate student, naturally, you know, since we were focused on individual neurons and oscillations are an emergent property, they were dismissed as like the humming of a car engine. They reflect the engine running, but they don’t actually make the engine run.

Earl    00:08:18    Um, that analogy isn’t perfect, because actually the vibrations of the engine have to be tuned so it doesn’t shake itself to death, and so the vibrations get reinforced; the oscillations actually do help engines run. But leaving that analogy aside for the moment, it’s becoming increasingly clear to us that these oscillations are highly functional. Or, there’s no reason to think they’re not highly functional, and every reason to think they are. I mean, your brain is an electrical, chemical machine. Neurons spike when their membrane potential reaches a certain threshold. And these neurons are resting in the electric fields that the other neurons are creating. And what electric fields do when they fluctuate is move the membrane potential further from and closer to the spiking threshold. I mean, excuse me, it’s inevitable: they’re going to have a role in brain function. Mm-hmm. <affirmative>. But now that we are, um, sort of moving on into the 21st century: 20th-century neuroscience was all about the parts, and now it’s all about the whole, about the emergent properties, about network properties and figuring out how things work together. And that’s an important part of it.

Paul    00:09:20    Do you, uh, I know this is an unfair question, but even when I, so I was a, um, non-human primate, um, neurophysiologist. And even when I started, which was, you know, late two thousands, we were still recording single neurons, you know? Yep. And do you look back on those many years that you spent recording single neurons and think, I wish I’d spent my time differently? Or do you think that that was a necessary step as technology, you know, increases, and now we have a different view of the brain, but not all of that was just wasted time, right?

Earl    00:09:51    No, no, no. It was foundational. You can’t figure out how the parts work together till you start figuring out the parts. And it’s clear that, well, one big revolution was, we used to think every neuron had one function. You know, you figure out what that neuron does, and you figure out what every neuron does, and you’ll figure out the brain. And now we know there are things like mixed selectivity, multifunctional neurons, all over cortex. But we couldn’t figure that out without going through the first step. So, you know, I don’t think of the time we spent recording single neurons as time wasted. There was a time, you know, into the early 1960s, when no one even knew how to make single neurons activate, and people like Mountcastle and Hubel and Wiesel figured that out. We had to get through that. We had to figure out what neurons do in isolation before we could begin to figure out what they’re doing in concert with one another. So no, it wasn’t a waste of time. It was absolutely foundational. And, I mean, I’m proud of the work we did in the past, and we wouldn’t be here if we weren’t there.

Paul    00:10:47    So you kind of referenced the single neuron doctrine, the old idea that each neuron, uh, has a specific function, is dedicated to a function. And that was like the prevailing view. Was that a ubiquitous, prevailing view? Because that’s the view that I have also, that that’s the history, right? But is it really that clean? Did the whole field just accept that that was probably the way things were, or were there plenty of other ideas floating around?

Earl    00:11:17    Well, nothing is ever that clean. I would say it was the, um, modal view in the field. Most people felt that way. But not everybody. I mean, people as early as, uh, Walter Freeman, for example, or Donald Hebb talked about the role of electric fields in producing dynamics that integrate the activity of individual neurons. My PhD advisor, Charlie Gross, used to like to say, there are no new ideas, only old ideas rediscovered <laugh>. But they do have to be rediscovered. So back then, those people were on the fringe. And now it’s more like the common view; now the people who think that single neurons only have one function, they’re the ones who are more on the fringe. But that’s the way science works. It’s paradigm shifting.

Paul    00:11:53    Mm-hmm. <affirmative>  

Earl    00:11:55    And you feel one way, and then the field changes, and we all end up somewhere else, with a lot of drama along the way.

Paul    00:12:01    <laugh>. Yeah. A lot of drama. What about, um, action potentials? Do you still consider them the currency of the brain? Or, you know, have we shifted away from thinking about a spike as the information code?

Earl    00:12:17    As the code and the only code? I would say yeah, but they’re an important code. I mean, without spiking you don’t get electric fields, you don’t get local field potentials. So, see, here’s the thing. The way we used to think about things was that spiking and spike rate were the only important thing, and all these other things were not doing anything. They were epiphenomenal, they could be ignored. Now we’re realizing that it’s all of this together that is how the brain works: it’s spiking and the fields they produce, the dynamics they produce, the network properties. So it’s all that together. It’s not just one thing or another. And, I don’t know, we don’t know everything about how the brain works, let’s be perfectly honest. We’ve learned a lot, we’ve changed a lot in the past 30 years. Um, but there’s still a lot we don’t know. And to my mind, a theory that explains more phenomena is way better than a theory that explains one phenomenon and ignores everything else.

Paul    00:13:11    Uh, I have trouble wrapping my head around it. So I agree with what you say, you know, thinking across levels of scale and, uh, processes and thinking about it in a more holistic way. But I have trouble wrapping my head around it and having it all in my working memory or something, you know, uh, <laugh>, and feeling comfortable, like I understand something. It’s like I can only grasp a section at a time, and it’s hard to piece it all together. Do you feel like you have it in your head in a comfortable space?

Earl    00:13:38    Um, no. Uh, the brain is really complex, and we’re not gonna figure it out in my lifetime. And I’ve made peace with that. I mean, everything we’re doing now is a stepping stone to eventually figuring out more about how it works. And if the argument is, well, all this stuff is so complicated, we’re never gonna figure it out in the next 10, 15, even 20 years, well, you gotta get over that, because the brain is not gonna be figured out in my lifetime, in your lifetime, even in the lifetime of the next generation of students. We’re all just contributing stepping stones to get to a greater truth later. And I’m perfectly comfortable with that.

Paul    00:14:14    Well, I mean, you’ve made a lot of progress in our understanding of the brain, much of which we’re gonna talk about here in a minute. But there are people who have kind of grand unified theories of the brain. What do you think, not necessarily of those people, but, you know, are we ready? Or would you say that we’re not ready for something like that?

Earl    00:14:32    No, not at all. I think all along the way, we gotta generate hypotheses. We gotta generate paradigms, we gotta make models, ways of looking at things, frameworks for looking at things, and then we test them. And then when they fall short, we replace them with new theories, frameworks, and hypotheses. So, no, I mean, you need people to come along with big ideas about what’s going on, just as a way of moving the field forward, even if we’re not quite there yet to really have a true unified theory.

Paul    00:14:59    Okay. So it’s been, uh, over 20 years now. I, I realized, so Miller and Cohen 2001 is this classic paper proposing a function for the prefrontal cortex. And I realized you probably have graduate students now that were born after that, uh, paper came out. And I was trying to think  

Earl    00:15:17    Making me feel old.

Paul    00:15:18    <laugh>. Well, I was trying to think back when I was a graduate student, like, if, if papers that were written like right before I was born, like had a big influence on me, and I’m not sure, but I wanted to ask you first of all, what, what the big idea was, uh, from that paper and then how you think and feel that it has held up over the years.  

Earl    00:15:39    Well, so the big idea, when I first started, I actually started out working in the visual system, and I got interested in, um, prefrontal cortex because of the work of people like Joaquin Fuster and Pat Goldman-Rakic. And I was a little more interested in, like, the higher-level cognition stuff. And I felt like when I was working in high-level visual cortex, I was looking at the influence of these high-level cognitive phenomena on visual processing. But I wanted to get at the heart of the matter. So when I, um, moved to MIT to start my faculty position, I switched from visual cortex to prefrontal cortex. Okay.

Paul    00:16:12    I was wondering how that came about, because all your earliest publications are in IT, uh, inferotemporal cortex, IT cortex. Yeah.

Earl    00:16:21    In Charlie Gross’s lab, that’s right. Yeah. And then I did my postdoc in Bob Desimone’s lab, and same thing. I moved to the prefrontal cortex when I started my, uh, faculty position. I’ll tell you parenthetically, when I first moved here as an assistant professor at MIT and I told my department head I was switching to, uh, prefrontal cortex, I got some blowback from that. They were like, oh, you know, we hired you to do visual system stuff in our group, and, you know, people who change fields midstream tend not to succeed. And I said, no, I’m gonna do it anyway.

Paul    00:16:53    That’s kind of crazy, because we’re just talking about a brain area switch. I mean, that’s part of the paradigm shift also. Um, you were talking about understanding the parts, and you, you know, are focused a lot on prefrontal cortex, but that’s a part of, you know, uh, big networks. And so it’s weird to think of it in isolation. And, you know, it used to be that everyone, I don’t know if it’s still this way, you can tell me, but everyone kind of had their own special area that they were known for, and other people should stay away from my area, let me, uh, record from that area.

Earl    00:17:24    Yeah, I guess that’s true to some extent. But you know, when I started the lab in 1995, we were first working on multiple-electrode techniques so we could record from more than one neuron at a time. Yeah. And back then we were still focused on single areas, because we could only stick our one or two or three, in that case four or five, electrodes in a single area. But now, with the rise of multiple-electrode recording, we almost never focus on one area. We may use prefrontal cortex as a fulcrum because we’re interested in, like, higher-level cognitive functions, but we record in multiple brain areas. Our recent experiments are recording from like six cortical areas and two subcortical areas simultaneously, because the brain is networks all working together.

Earl    00:18:08    And you gotta understand how all these things work together if you wanna understand cognition. By the way, we got off topic here. Yeah, I almost have to bring us back. Yeah, go ahead. <laugh>. So when I started, um, working in prefrontal cortex, the state of the art was, um, working memory. That was what prefrontal cortex does: working memory, it holds things in mind. Now, just putting that in a little bit of context, that was sort of the way it was felt: the prefrontal cortex is involved in holding things in your conscious mind so you can do stuff with them. Mm-hmm. <affirmative>. Even so, the original working memory model wasn’t just maintenance, it was also the executive part, the control of working memory. And I remember when I started in the prefrontal cortex, I thought, well, you know, it’s gotta be more than just holding stuff in mind.

Earl    00:18:53    And one of the things we did was, you know, I need to tell you, neurophysiology is a bit like radar. The signal you get out depends on the signal you put into the brain <laugh>. And at the time, the state of the art was having an animal trained on a working memory task, and what you vary across trials is what information, what stimulus, what location, what object the animal holds in working memory. And what you see from your electrodes is that stuff varying. The rest of it was all backdrop, right? And I remember I thought at the time, well, this is all very interesting and stuff, but the animal knows how to do this working memory task. How does the animal know how to do the task?

Earl    00:19:33    That seems to be where cognition is: not the stuff it’s holding in mind, but the operations, how it learned to hold this stuff in mind and respond at the right time and stuff like that. So we started doing something that people hadn’t done before. We not only varied what object or picture stimulus the animal was holding in mind, we also varied the rules the animal applied to it. We did it in a balanced way, so you can actually compare the two head to head. And when you do that, many more neurons in the prefrontal cortex cared about the rules of the task than about the content, the information the animal was operating on. So what led to Miller and Cohen was that I thought, we had this feeling, well, we gotta take the field into this other domain, where we understand how the brain operates, how the brain learns the rules of the game, how the brain learns what it can do, what it does.

Earl    00:20:23    And goal direction: executive brain function is all about identifying goals and coming up with means to achieve them. And this is all acquired knowledge. So that led to this, um, insight that what the prefrontal cortex does is it learns the rules of the game. It figures out how the world works and uses that, like a puppet master, to direct activity in the rest of cortex. And this wasn’t completely in isolation. There was the biased competition model of Desimone and Duncan, where they said top-down signals select things for visual attention. So it was kind of an extension of that. But I think the main insight was that when the prefrontal cortex was learning these rules, the logic of a goal-directed task, it wasn’t learning them as an esoteric set of logical operations. It was expressing these logical operations in terms of a map of which circuits in the rest of cortex need to activate to do that thing, to follow that rule. So we called them rule maps, because the prefrontal cortex was absorbing the rules of the game, the goal-oriented structure of the world, and expressing it in terms of what circuits need to be activated in the rest of the brain, as if it had developed a roadmap, like a traffic cop with a roadmap. Once it had this map, it could tell the rest of the cortex what to do. And that’s essentially the Miller and Cohen model.
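
To make the “traffic cop” idea concrete, here is a minimal toy sketch in Python. It is not code from the interview or from the 2001 paper; the function names, weights, and dynamics are illustrative assumptions. A rule signal stands in for prefrontal top-down bias: the same ambiguous stimulus drives two competing posterior pathways, and whichever pathway the current rule biases wins the competition, in the spirit of guided activation.

```python
import numpy as np

def respond(stimulus, rule_bias, pathway_weights, n_steps=50, dt=0.1):
    """Toy 'guided activation': two response units compete via mutual
    inhibition; a prefrontal rule signal biases one pathway so the same
    stimulus yields different responses under different rules."""
    r = np.zeros(2)                                      # activity of two response units
    for _ in range(n_steps):
        drive = pathway_weights @ stimulus + rule_bias   # bottom-up input plus top-down bias
        inhibition = 0.8 * r[::-1]                       # mutual competition between pathways
        r += dt * (-r + np.maximum(drive - inhibition, 0))
    return r

# The same stimulus activates both pathways equally (a Stroop-like conflict).
stimulus = np.array([1.0, 1.0])
pathway_weights = np.eye(2)

# Rule A biases pathway 0; rule B biases pathway 1 (illustrative values).
rule_A = np.array([0.5, 0.0])
rule_B = np.array([0.0, 0.5])

print("response under rule A:", respond(stimulus, rule_A, pathway_weights))
print("response under rule B:", respond(stimulus, rule_B, pathway_weights))
```

Under rule A the first pathway dominates, under rule B the second does, even though the bottom-up input never changes; the “rule map” here is nothing more than which bias vector is applied.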

Paul    00:21:39    And how do you feel that it has aged? Is there anything that you would have changed about it, or that you think you got wrong?

Earl    00:21:47    Well, I think the idea that the prefrontal cortex learns rules and learns the goal-oriented structure of the world, I think that’s pretty much still true. How it does it is another story. We’re actually writing a review that updates a few things mm-hmm. <affirmative>. Um, one thing we underappreciated at the time was the role of subcortical structures in this process. You know, the prefrontal cortex, the cortex as a whole, I now view as an add-on, an extension, an upgrade to the subcortical structures that were already doing a lot of these things before we evolved a cortex. Hmm. So you can’t understand the cortex without understanding the basal ganglia and the thalamus, for example. You know, goal-directed executive functions grew out of, um, systems for voluntary motor control.

Earl    00:22:34    It expanded on that. And that’s your cortex: it took these simpler functions your subcortex was already doing, whether it’s, you know, voluntary movement or memory consolidation in the hippocampus, and added tissue, more complex tissue and a lot more tissue, that could expand upon those functions. So I would bring the role of subcortical structures into the model. And there’s evidence for that, uh, that, like, the anterior thalamic nuclei are there to help traffic feedback signals from the front of the brain to the back of the brain, these top-down executive control signals. And the other thing, if we go back to the oscillations, is how does the brain establish these pathways? How does the traffic cop tell which circuits to activate in the rest of cortex? It can’t be anatomy alone, because there are lots of synapses in your brain, and it takes a long time to change those synapses.

Earl    00:23:25    It takes a long time to build new pathways in your brain. So that’s where I bring in the role of oscillatory dynamics. That’s what we think oscillations are doing. We now think of brain anatomy as not being destiny but possibility. Brain anatomy is like the road and highway system. It just says where traffic could go, and your thoughts are where traffic actually does flow from moment to moment, and something’s gotta direct that traffic. And we think these patterns of oscillatory resonance, neurons that hum together temporarily wire together, these shifting patterns of oscillatory resonance are what actually determine where the traffic flows on the infrastructure of the anatomy. And I’ll say one last thing before we move on: when I say this, people say, oh, you’re saying anatomy isn’t important?

Earl    00:24:17    No, anatomy is important. The traffic doesn’t go anywhere without the infrastructure. And the infrastructure is constraining: where you build new roads or tear down old roads, that’s very, very important. But cognition is gonna be about how all this activity dances along all this infrastructure, the anatomy. And that’s what these emergent properties do: these large-scale oscillations, oscillatory dynamics, local field potentials, electric fields, they are large-scale organized changes of neuronal excitability. And that, to my mind, has to be the kind of thing that allows the sort of flexible direction of thoughts that we associate with executive brain functions and high-level cognition.

Paul    00:24:56    Well, there are a lot of metaphors that you threw out there, and, uh, another one that you frequently use is guardrails. You think of oscillations as guardrails for that traffic, but the traffic is still action potentials. You still think of that as the main currency, the traffic.

Earl    00:25:10    Uh, yes, I do. I think that, again, without spiking you don’t have anything, and, uh, these large-scale organizations of excitability are gonna sculpt where the spikes flow and which neurons, um, become activated. In fact, we have a paper that’s just about to come out, a new theory called spatial computing. This is sort of an extension of Miller and Cohen in the sense that, well, how does the brain use these large-scale changes of excitability? How does it use that to do things? And the idea is that the way your brain does computation is by controlling where information is expressed in its networks.

Paul    00:25:48    Yeah, what you guys call spatial computing. Let’s go ahead and get to that, spatial computing. Um, so I recently had Ole Jensen on the podcast, and in his case he focuses on alpha, uh, oscillations, but I mean, it’s basically alpha/beta, these really slow oscillations, as being a top-down, um, controlling signal, much like you have developed in your own work. Whereas there are these high-frequency gamma oscillations that are more the currency of incoming sensory stimuli, and they’re being controlled by the, uh, alpha/beta guardrails, right? Um, yeah. And so you subscribe to that story, but spatial computing on top of that is a much more nuanced, uh, location-based account. How do the alpha/beta waves know where to go and what to do, right?

Earl    00:26:41    Well, first of all, let’s back up a moment when we talk about alpha/beta waves. First of all, in frontal cortex it’s beta, but we’ve done recordings where we record all over cortex, and as you move backwards in cortex, the frequencies drop a little bit. Um, so high gamma becomes slightly less high gamma, and beta becomes alpha. Now, probably when we learn more about the brain, we’ll learn more about the subtleties of the different, um, you know, frequency bands, alpha, beta, whatnot. But the thing to keep in mind about alpha versus beta is that those are arbitrary frequency bands that someone made up 150 years ago, so they don’t necessarily correspond to any real functional difference. The difference will be between lower and higher frequency oscillations. And, and I think Ole Jensen would agree with me, don’t think of alpha or beta as having one function in the brain.

Earl    00:27:28    It doesn’t. It’s good for a particular function in the brain, which is top-down control. What alpha/beta versus gamma are, are different energy states of the network. Okay? When neurons are spiking at a moderate rate, you get frequencies in the alpha/beta range, and then when there’s a sensory input, there’s more energy coming in, there’s more spiking, and things fly up to the gamma range. So think of alpha/beta or gamma as different energy states of the network, and the way you control gamma is by either allowing networks to express that energy, gamma, or using alpha/beta to force networks in certain locations in cortex down to a lower energy state, and that prevents gamma from happening. So that’s the idea. And the idea behind spatial computing in a nutshell, and it’s hard to express in a

Paul    00:28:23    Nutshell <laugh>. You can take your time.

Earl    00:28:25    That’s fine. Yeah, I’ll try. Um, the idea is that the information about the contents of thought, the things we see, the things we think about, motor commands, sensory information, the nuts and bolts, the primitives of how we perceive the world, that’s all stored in the brain, in cortex, in high-frequency components, in the details of how individual neurons are wired together. And they’re expressed repeatedly all over cortex, all over cortical networks. Think of these ensembles that correspond to, like, a picture of an apple, for example. They will be expressed in multiple places redundantly, all over cortex, in local and global networks, right? Um, and think of those representations of stimuli as being like grains of sand, uh, in a big sandbox. And what, lemme

Paul    00:29:16    A clarification question, just so, literally, let’s say apple has a thousand different repeated, uh, network structure motifs in your brain. Is that what you’re saying? Yeah. Okay. That’s

Earl    00:29:28    The idea, sorry, that’s the idea. So you’ll have apple expressed all over the place. So now, um, what alpha and beta do is, alpha and beta basically are a patterning signal. They’re a patterning signal being imposed on cortex, and wherever alpha/beta lands in cortex, it forces that local network down to a lower energy state, so now gamma can’t be expressed, okay? And places where alpha/beta isn’t are where gamma can be expressed. So think of this pattern of alpha/beta as kind of a photographic negative of where you want information to be expressed in cortex. Mm-hmm. <affirmative>. So let’s go back to the grain of sand analogy. You have all the apples and oranges, all this stuff, expressed repeatedly in multiple grains of sand all over this big sandbox. And the alpha/beta signals are patterning signals acting more, uh, on a macro scale.

Earl    00:30:15    So now think of a checkerboard or patchwork imposed on the sandbox, right? So now if I have one pattern of this checkerboard or patch pattern on the sandbox, then only certain parts of that sandbox can express apple or orange. And if I switch the pattern of this patchwork, then other parts of that sandbox will express apple and orange. So that’s how we think computation works. So one way to think about it is, imagine I set up an animal on a simple task where the animal has to remember two pictures in the order in which I present them. So apple and orange, or orange and apple, or apple and house, or house and apple, you get the idea. So here’s how it works. The brain’s already been trained on this task, it knows the operation.

Earl    00:30:59    So here’s what happens. The first stimulus is about to come along, and what the brain does is set up this pattern of alpha/beta. And let’s keep it very simple, let’s say there are just two patches, right? I’m waving my hands here. Okay? So you have two patches, and one pattern of alpha/beta activation corresponds to slot number one, for the first object. So the first stimulus comes along in this task, and the brain shifts its alpha/beta pattern to create this patch where the first stimulus is gonna be expressed. So the first stimulus comes along, it’s apple, and all the neurons in that patch, where the gamma is allowed and beta is everywhere else, they go apple, apple, apple, because the animal is seeing the apple, right? Mm-hmm. <affirmative>. And they get primed. Then the second stimulus comes along, and the brain shifts to another alpha/beta pattern, and now a different patch corresponds to slot number two.

Earl    00:31:48    It’s allowed to express the representation of the second stimulus that comes along. Let’s say it’s orange: orange, orange, orange, right? And now, at the end of the delay, I want the animal to recall, or me for that matter, which stimulus was first and which one was second, apple or orange. What you do is recreate the alpha/beta pattern that corresponded to apple, and all the neurons that first got activated when the apple was seen were primed, so now they can say apple, apple, apple. And now I want to recall the second stimulus, so I shift to a new alpha/beta pattern, and the neurons that were primed by orange can now express their activity and say orange, orange, orange. So that’s how the brain does operations: its operating system, its computations, work by controlling where information is expressed in cortex. Now, no doubt that’s gonna be way too simple, but in the end, the brain represents information. It doesn’t have, you know, dedicated calculating circuits as far as we know (well, it may have a few, may have some), but representing information is a major way the brain does things. So it seems to me that a lot of its operating system should be in the control of representation.
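
Below is a minimal toy sketch of the spatial-computing idea as just described. It is not the published model; the patch/slot structure, the random ensembles, and the readout are illustrative assumptions. Content patterns are available redundantly in every patch; an alpha/beta “mask” decides which patch may express gamma/spiking, so order is carried by where an item was allowed to be expressed, and recall works by reinstating the mask for a slot.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PATCHES, N_UNITS = 4, 32
# Each item has the *same* ensemble pattern available in every patch
# (content is stored redundantly across cortex in this toy).
items = {name: rng.standard_normal(N_UNITS) for name in ["apple", "orange"]}

def alpha_beta_mask(slot):
    """Alpha/beta as a 'photographic negative': suppress every patch
    except the one assigned to this slot, where gamma/spiking may occur."""
    mask = np.zeros(N_PATCHES)
    mask[slot] = 1.0
    return mask

def encode(item, slot):
    """Express the item's ensemble only in the patch the current
    alpha/beta pattern leaves open."""
    gate = alpha_beta_mask(slot)
    return np.outer(gate, items[item])          # (patch, unit) activity

def recall(memory, slot):
    """Reinstate the alpha/beta pattern for a slot and read out whichever
    item pattern best matches the activity in the open patch."""
    patch_activity = memory[slot]
    scores = {name: patch_activity @ pattern for name, pattern in items.items()}
    return max(scores, key=scores.get)

# Encode "apple" first and "orange" second; order is carried by *where*
# the (redundant) content was allowed to be expressed, not by the content.
memory = encode("apple", slot=0) + encode("orange", slot=1)
print(recall(memory, slot=0))   # -> apple  (first item)
print(recall(memory, slot=1))   # -> orange (second item)
```

The point of the sketch is only that the same content pattern can serve either slot; what changes between “first” and “second” is the spatial mask, not the representation of apple or orange itself.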

Paul    00:32:53    Hmm. I asked Ole this also, um, and he said that he couldn’t get rid of the homunculus. And so, you know, the question is, well, if we’re using alpha/beta to configure the system, to control and configure, um, the content, where does that control come from? You know, do we still have the homunculus problem?

Earl    00:33:17    If I had the answer to that question, we’d have the brain all figured out. We could all go home. So yes, obviously it’s not gonna be a homunculus, it’s gonna be an emergent property. And what is that an emergent property of? Well, I’ll do some hand waving here. Um, there are recurrent, anatomically closed-loop connections between the prefrontal cortex and the basal ganglia and thalamus. And there’s a big influx of dopamine, which signals reward and reward prediction error, so it could tune up these loops. You know, first of all, dopamine can be a permissive signal, a gating signal, that allows the brain to change and form these representations that you need for high-level control. And then there are the closed anatomical loops between the cortex, like the prefrontal cortex, and subcortex. Whenever I see closed anatomical loops in the brain, I think recurrent, I think recurrent processing, and I think, um, you know, bootstrapping operations, right?

Earl    00:34:11    Recursion, that’s the word I was looking for, recursion. Think of bootstrapping operations. So think about the anatomical loop from the prefrontal cortex down to the basal ganglia, back through the thalamus, back up to the cortex again. It’s like a snake eating its tail. There are these channels going through, there’s some crosstalk, but there are channels going through there. And when you see something like that in the brain, what I think of is recursive processing: the system can learn something through one iteration, and then when it changes, that becomes fodder, more material, for another iteration, and another iteration. And it’s gotta be something that’s open-ended that way, some sort of bootstrapping or recursive process that allows the open-ended nature of human cognition. So control must come from some tuning up of these recurrent processes, these bootstrapping operations, that allow the brain to form these higher and higher representations that contain information about how to achieve goals.

Earl    00:35:04    We’ve done a lot of work on how the brain learns categories and concepts mm-hmm. <affirmative>, cat versus dog, numbers, same versus different. If you think about it, if I can identify a cat I’ve never seen before, I can generalize across all the cats in the world, and I have some template, some, um, you know, representation in my brain of what a cat is. Well, you know, goal direction: if you can generalize among a bunch of individuals in the present, it’s the same process that allows you to generalize to the future. So I think it’s part and parcel of the same operation.

Paul    00:35:37    I recently also had Carolyn Dicey Jennings on the podcast. She’s a philosopher, and she leans a lot on your work with Tim Buschman about, you know, this alpha/beta control oscillations story, um, to argue that we have a self, that it’s evidence that we have a self. And I know you were just doing some hand waving, and we don’t need to go down a long hand-waving, uh, conversation, but I’m just wondering if you had any thoughts on that, since I asked about where the control comes from and then remembered, uh, her philosophical work suggesting not necessarily that the control is coming from the self, but that this is evidence for a self.

Earl    00:36:15    Well, it’s beautiful, and I love that work, that’s really, really nice work. I guess it all depends on, I’m about to get hand-wavy here, how you define self. Yeah. But how do you define self? I don’t know. It’s like, how do you define free will? This is a question we actually take up in one of the classes I teach. Yeah. Um, but, you know, to the extent that you have control over your own thoughts, you need something that’s a controlling signal, a top-down feedback controlling signal. And right now, this alpha/beta signal is a good candidate for it. And again, I don’t want to give anybody the impression that, oh, alpha/beta does this one thing in the brain, executive control. No, alpha/beta is an energy state that’s useful for executive control, and we could talk about why low frequencies are better for executive control

Earl    00:37:00    than the high frequencies. It’s probably because low frequencies also have a larger spatial extent, so they’re good for large-scale organization of, um, excitability in the brain, right? I mean, if you look at a pond when it’s drizzling, there are a bunch of high-frequency ripples in the pond, but if a boat goes by, that boat’s gonna swamp out everything else with its wake. So that’s probably why low frequencies are more associated with executive control: they have a way of overriding and organizing neural activity on a large scale.

Paul    00:37:43    You also talk about the idea of dimensionality. Um, and I mean, this goes back to spatial computing, right? So you have this really high-dimensional, uh, neural activity, which is the content, and then you have this alpha/beta control, right, that is controlling that content. And you’ve written about it as if alpha/beta is serving as guardrails to funnel information along a lower dimension, mm-hmm <affirmative>. And is this why we have access to only one or two things in our head at a given time? Is that related to the funneling and how we can access the content?

Earl    00:38:23    Yeah, that’s a great question. I’ll get to cognitive capacity in a second, but when you talk about the guardrails, one extension of this work is, so we’ve worked a lot on spiking and LFPs. And by the way, obviously LFPs are gonna have a, um, causal role in where spiking occurs, because you look all over the brain and you see spiking phase-locks to different phases of ongoing, uh, LFP oscillations. So it seems like, you know, it is organizing, uh, sorry, go

Paul    00:38:53    Ahead. Does everyone agree with you? Is that a ubiquitous agreement in the field, that LFPs are causal? Because some people might still say they’re epiphenomenal, right?

Earl    00:39:03    Yeah, well, no, of course not. But this is where we get into paradigm shifting and Thomas Kuhn. People feel that way, I think, because there are still kind of the vestiges of this old way of thinking about things, that spikes are everything and individual neurons are everything. I mean, this is just physics. The electric fields work in both directions. Everybody agrees you can read interesting information from LFPs and even electric fields, but they say, oh, it can’t work the other way. Well, no, that’s not the way physics works. If you have these electric fields that you can read information from, it works the other way too. And, uh, I mean, it’s pretty much inevitable, you know? And when I say that, people say, oh, but you record from a single neuron in a dish and you see the action potential, and that’s huge.

Earl    00:39:48    And the LFP is small. Well, yeah, that’s a single neuron. Your brain is not a single neuron. Your brain’s like a zillion neurons all smushed together, all interacting, and, you know, um, these effects add up in a very nonlinear way. And I’ll put it this way: these fluctuations in electric fields are so strong that we could measure them outside the skull 150 years ago using crude equipment, and you’re gonna tell me that doesn’t have an influence inside the skull? No way, <laugh>, of course it does. You know, and also, I was on Twitter the other day, and somebody, I wish I remembered their name so they could get credit for it, posted an EM photograph of three neurons together, right? And the three neurons are together, and their axons are forming synapses with their dendrites, right? Mm-hmm. <affirmative>. But these three neurons, their bodies are smushed together really, really tight. The dendrites and the axons are around them like a halo, but the neuron bodies are smushed together. In fact, the neurons flatten their bodies out so they have maximal contact between the somas. There’s no synaptic transmission going on there. You’re gonna tell me neurons are smushed together like that when everything is all spiking and synapses and releasing neurotransmitters? Obviously not. Sorry, that was a sidetrack. What was the question

Paul    00:41:05    Again? No, that’s okay. Um, well, we were talking about the dimensionality and the capacity of these signals. Yeah.

Earl    00:41:12    So lately, what we’ve been working on with my colleague, uh, Dimitris Pinotsis at University College London, is that we’re taking these measures of neural activity like LFPs and extending them up a little bit to electric fields, near electric fields in the brain. Now, when I say electric fields, I’m not talking about electric fields you read, you know, ten feet away. I’m talking about near electric fields hovering around all this activity that neurons are doing. And what he showed quite elegantly in a paper that was published last year is that if you look at things at the near electric field level, you can read the contents of working memory just like you can with single neurons. But here’s the thing: little or no representational drift. Those electric field signals are steady, steady, steady, trial to trial to trial. And if you record on the single neuron level, you know this as well as I do, you get massive representational drift. You could record from a hundred neurons, a thousand neurons, a million neurons individually, and if you do the same trial 20 times in a row, you’ll get 20 different patterns of activity. Do it a thousand times, you’ll get a thousand different patterns of activity. Think of the brain as like a giant orchestra: the neurons are the individual players, and the melody plays on, but the individual players come and go constantly.

Paul    00:42:24    Right? It’s a real pain when you’re recording single neurons, uh, all day.

Earl    00:42:28    <laugh>. Yes, exactly. But when you get to this higher level of electric fields, now you’re starting to get organization on a useful scale, and you’re getting organization that’s stable, and there’s no representational, uh, drift. So cognitive function is gonna be at the level of this melody playing. It’s not gonna be at the level of individual players, because individual players come and go constantly. And if you wanna have top-down control, you know, executive control of thought, you can’t do it on the level of individual players, because they’re fickle, they come in and out. It’s gotta be on the level of, um, if I may carry the analogy further, orchestral sections, this larger-scale organization of activity. And that’s where the LFPs and electric fields come in. And it’s not to say that the neurons aren’t important, because you can’t get there without the spiking activity of the individual neurons. But I think that’s the level at which a lot of the interesting stuff happens, because it’s an extremely useful level, the level of electric fields and LFPs.
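
A small, hedged toy illustration of the orchestra point (this is not the Pinotsis and Miller analysis; the numbers and the patch-pooling step are assumptions made up for the sketch). If the coarse spatial profile of excitability is fixed while the particular neurons that fire vary from trial to trial, single-neuron patterns drift substantially, but a pooled, field-like readout stays comparatively stable.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PATCHES, NEURONS_PER_PATCH, N_TRIALS = 20, 10, 50

# The 'melody': a fixed coarse spatial profile of excitability across patches.
patch_rate = rng.uniform(0.1, 0.9, N_PATCHES)

def one_trial():
    """Which individual neurons fire varies from trial to trial
    ('the players come and go'), but the coarse spatial profile is fixed."""
    p = np.repeat(patch_rate, NEURONS_PER_PATCH)
    return (rng.random(p.size) < p).astype(float)

trials = np.array([one_trial() for _ in range(N_TRIALS)])

def mean_consecutive_corr(x):
    """Average correlation between each trial's pattern and the next one."""
    return np.mean([np.corrcoef(x[i], x[i + 1])[0, 1] for i in range(len(x) - 1)])

# Single-neuron level: patterns drift a lot from trial to trial.
single_neuron = mean_consecutive_corr(trials)

# Coarse, field-like level: pool each patch before comparing trials.
pooled = trials.reshape(N_TRIALS, N_PATCHES, NEURONS_PER_PATCH).sum(axis=2)
field_like = mean_consecutive_corr(pooled)

print(f"trial-to-trial correlation, single neurons: {single_neuron:.2f}")
print(f"trial-to-trial correlation, pooled 'field': {field_like:.2f}")
```

The pooled numbers come out much more similar across trials than the single-neuron patterns do, which is the sense in which a coarser, field-like level of description can be stable even while its individual players are not.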

Paul    00:43:26    Hmm. I think I’ve heard you talk about, you know, the proverbial spotlight, uh, or looking for your keys under the lamppost. And that’s what we were doing when we were, uh, under the single neuron doctrine, recording single neurons and talking about that. I mean, we still are, we’re just under a different lamppost now. But is it just a wider field of view, or do you think that that lamppost will change again in the future? You know, now that we’re thinking of these higher-order, um, LFPs and larger-scale, high-dimensional recordings and so on.

Earl    00:43:59    Oh, absolutely. I mean, this goes back to what I said earlier about us being stepping stones. The single neuron rate coding model was the dominant paradigm back in the 20th century, and now we’re more into this emergent property paradigm. Well, someday this paradigm will be replaced too, and it’ll be something else. Now, do I feel inadequate or depressed that one day the paradigm we’re working under will be replaced? No, because we can’t get to that future paradigm without going through this one first. So we’re all just stepping stones, and we all gotta kind of make peace with that. Um, yeah. And this is straight outta Thomas Kuhn. I mean, in science there is always this tension between tradition and rebellion, right? Tradition in the sense that you gotta propose things that are reasonable.

Earl    00:44:44    I’m not gonna say that, you know, spirits are controlling my executive functions. I’m gonna say something reasonable, like electric fields are, because the brain is electrical. So that’s reasonable. That’s appealing to tradition. It’s gotta be something that’s plausible. But at the same time there’s rebellion, because your job in science is to move forward beyond the current paradigm. So, you know, we can’t get too stuck in our current paradigm. And when people say things like, oh, you know, um, LFPs and oscillations, they’re epiphenomenal: no one knows enough, not you, not me, no one in this world knows enough about the brain to say something like that is an epiphenomenon. When someone says something’s an epiphenomenon, I hear, that doesn’t fit my model.

Paul    00:45:27    Hmm. But we have to have those models, right? Just to think about anything. Yes, yes, that’s right. It’s a frustrating thing knowing that all models, all our

Earl    00:45:37    Models are wrong. Oh, absolutely, all our models are wrong. And, like, some are more wrong than others, though. But, um, look, everything I’m telling you right now, you know, maybe long after I’m gone, 10, 20, 30 years from now, people will say, oh, that’s not the way things work. But there was some truth to it. Just like there’s truth to the idea that individual neurons and spiking and rate coding play an important role in brain function. But it’s not everything. And what I’m proposing now is not everything either, and one day people will see it’s not everything and there’ll be a new paradigm. But that’s just the way science works. You can’t hang on to old paradigms. If you wanna hang on to old things, then join the clergy. You know,

Paul    00:46:16    <laugh>. I know that we’re on an aside, but this is fun, so I’m gonna stay on it for a moment before I bring us back to capacity, which we never talked about. Oh yeah. You know, you’ve often talked about how, I mean, you think that this is a paradigm change that you have experienced throughout your career, going from that clockwork, single-neuron-function view of the brain to a now more holistic one. And of course a paradigm shift is not a real thing, right? That’s a model or metaphor itself, but I mean, we can think of it as a real thing. Do you think that that has been accompanied by a shift away from thinking in a reductionist manner? Or is neuroscience still predominantly in a reductionism sort of regime? And should it be, if so or if not?

Earl    00:47:01    Well, reductionism is important, especially in a new science, especially when you’re figuring out something as complicated as the brain, because again, you gotta figure out the fundamentals, how things work. I would say we’re still in a reductionistic phase. We’re just less reductionistic now, because there’s still a lot to figure out about the brain, and there’s still a lot we don’t understand. So we’re still kind of in an almost cataloging phase: we’re describing phenomena and cataloging them and putting them together into models and paradigms and frameworks. But again, our current theories are only things to use to generate and test hypotheses. They’re not things to be cherished and preserved.

Paul    00:47:41    <laugh> Well said. Okay, alright, let's get back to capacity. This is one of the reasons why I wanted to talk about working memory, because, and I don't think we've even mentioned this yet, the classic way of thinking about working memory and how it works in brains is that while you're thinking of something, the neuron concerned with that thing, apple, say, is active throughout the period that you're thinking of it, maintaining it in working memory. And then when you're done thinking of apples, that neuron goes quiet. That's the classic story, based on neurophysiology recordings and other means, and that's the story that has changed over time. So this is kind of what we're going back to in talking about the oscillations. And maybe you can say a word about the capacity of our content versus our ability to access that content.

Earl    00:48:36    Well, I think we're talking about two things: the capacity, and the story about working memory, the working memory model, and how the two relate, because capacity enters into working memory. But stepping back for a moment, the view of working memory used to be persistent activity. You're thinking about something and neurons are spiking, spiking, spiking, and then when you stop thinking about it, they stop spiking. Well, that's largely a vestige. And by the way, that's a 50-year-old model, from 1971. I think you'd be hard pressed to argue that we figured out something as high-level as working memory 50 years ago and essentially nothing has changed since. There's just no way. That view is largely a vestige of the single-neuron approach, because what do you do when you record from a single neuron?

Earl    00:49:24    First of all, and I know this cuz you and I both spent a long time doing this, when you can only record from one neuron at a time, you're looking for neurons that do something interesting. Yeah. So you're biasing your sampling towards the property you're looking for. And then what you do, cuz you're only recording a single neuron, is you record 25 or 30 trials and you average across them to get the average activity. Well, all that averaging masks all these interesting dynamics that are going on, because averaging throws those dynamics away. Now that we're recording from many electrodes simultaneously, we can't select individual neurons, so we're doing more random sampling, and we have enough statistical power to look at what happens at the individual-trial level, in real time.

Earl    00:50:09    And what happens in real time is not persistent activity. You find that the bulk of neurons fire just periodically, and there are lots of pauses during these memory delays. Now, that's not to say spiking during memory delays doesn't matter; again, there's always truth to the old story, and this spiking during memory delays does underlie working memory. It's just not persistent spiking. There's something much more complex going on there. There's sparse firing. There's this interplay of alpha, beta, and gamma dynamics going back and forth: there are periods of spiking with gamma, then alpha comes along and shuts it down, then it gets expressed again. That's what actually goes on if you look in real time, across many neurons, during performance of working memory tasks. It's much more complex than we thought.

Earl    00:51:02    So the question is, how does the brain maintain memories when the spiking is only sparse and there are gaps in time with no spiking? Well, that comes from, actually, one of the last papers of Patricia Goldman-Rakic, one of the pioneers of the neurophysiology of working memory. She and her colleagues identified short-term synaptic plasticity mechanisms: when a neuron spikes, it potentiates synaptic weights for a little under a second, based on calcium dynamics. And the current thinking is that that's what's going on. The neurons spike periodically during this memory delay, and the spiking causes a temporary change to the synapses that essentially leaves an impression in the network of the spiking pattern that has occurred. Then, after a while, the system's got to spike again to refresh those weights. That seems to be the state of the art about how working memory works.
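To make that refresh idea concrete, here is a minimal sketch (an illustration only, not the model from that paper): a synaptic weight gets a temporary boost whenever the presynaptic neuron spikes and then decays back toward baseline over roughly a second, so occasional spikes during a delay keep re-impressing the trace. The time constant, boost size, and spike times are made-up values.

```python
# Sketch of a spike-refreshed short-term synaptic trace (illustrative values).
import numpy as np

dt = 0.001            # simulation step, 1 ms
T = 3.0               # total delay period, 3 s
tau_facil = 0.8       # assumed decay time constant of the potentiation (~1 s)
w_base, w_boost = 1.0, 0.5

spike_times = [0.2, 0.9, 1.7, 2.4]   # sparse "refresh" spikes during the delay
times = np.arange(0.0, T, dt)
w = np.full_like(times, w_base)

for i in range(1, len(times)):
    # exponential decay back toward the baseline weight
    w[i] = w_base + (w[i - 1] - w_base) * np.exp(-dt / tau_facil)
    if any(abs(times[i] - s) < dt / 2 for s in spike_times):
        w[i] += w_boost              # each spike re-potentiates the synapse

idx = int(round(2.4 / dt))           # sample index of the last refresh spike
print(f"weight just after the last spike: {w[idx + 1]:.2f}")
print(f"weight at the end of the delay:   {w[-1]:.2f}")
```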

Paul    00:51:53    So it needs to spike about once a second to do that? Or it spikes and sets the weights, and then they're stored in what are called silent synapses, perhaps, which we can talk about in a second <affirmative>, or, it doesn't matter, the next time it spikes it'll reactivate that short-term weight. Is that how it works? So it doesn't have to be once per second?

Earl    00:52:12    It doesn't have to be once per second. And if you're holding multiple things in mind simultaneously, again, here's where the capacity thing comes in, you have different spiking for different representations. In fact, that's one of the ways we think it may work. The problem with persistent spiking: the way you model persistent spiking is with something called attractor dynamics. An attractor is just a stable pattern of activity in a network, right? Well, one thing computational modeling has shown is that attractor dynamics are a really bad way to do working memory, because if there's any overlap between two attractors, they swoosh together and you lose the information. And one thing we have learned about higher-level cortex is that there aren't isolated representations of one object versus another.

Earl    00:52:54    There's mixed selectivity and multifunctional neurons, so there's a lot of overlap in representation. So here's one thought about why the brain works this way. First of all, lots of spiking costs lots of energy. You don't want the brain spiking constantly, so sparse spiking is better. But beyond that, if you're trying to hold multiple things in mind, say two or three things simultaneously, with this sparse spiking you can activate one representation, then the other, then the other, and they don't overlap in time. So you don't have this problem of attractor dynamics smushing together and messing things up. And that's one explanation of why we have this capacity limitation: there are only so many refreshes you can fit between bouts of spiking activity without the representations smushing together.
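As a toy illustration of why taking turns in time helps (my own sketch, not the lab's model), consider two items whose population patterns overlap because of mixed selectivity. A single static pattern holding both at once looks ambiguous to a correlation-based readout, whereas alternating bursts keep each item cleanly decodable. The patterns and the decoder here are arbitrary stand-ins.

```python
# Toy comparison: blended static pattern vs time-multiplexed bursts.
import numpy as np

rng = np.random.default_rng(0)
n = 100
item_a = rng.random(n)            # two overlapping population patterns
item_b = rng.random(n)

def match(pattern):
    """Correlation of a population pattern with each stored item template."""
    return (round(np.corrcoef(pattern, item_a)[0, 1], 2),
            round(np.corrcoef(pattern, item_b)[0, 1], 2))

# One static pattern holding both items at once: ambiguous readout.
print("blended pattern vs (A, B):", match(item_a + item_b))

# Time-multiplexed bursts: each readout window sees only one item.
print("burst 1 vs (A, B):", match(item_a))
print("burst 2 vs (A, B):", match(item_b))
```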

Earl    00:53:39    So that's one explanation. The other explanation, which is not mutually exclusive, they could both be true, is that it's becoming increasingly clear that cognition is rhythmic, not just working memory, but cognition in general. For example, we've shown this with alpha-beta dynamics in Wisconsin Card Sorting-style rule shifting, going back and forth in alpha, beta, and gamma. And people like Sabine Kastner at Princeton, for example, she has studied sustained attention. The animal has to pay attention to a stimulus at a cued location on a computer screen, cuz there's gonna be a faint target there, and you gotta catch that target as soon as it occurs. Sustained attention should be the most sustained thing ever in the brain <laugh>. And when you look at it, it waxes and wanes in theta, four times a second. So behaviorally, the animal is better at detecting the target, then worse, then better, then worse.

Earl    00:54:29    And those mirror the theta LFP oscillations going on in cortex at the same time. Now, if you really think about it, that makes a lot of sense, because if I have sustained attention, sure, I wanna pay attention here, but I can't ignore everything else around me. So these periodic oscillatory dynamics allow the brain to free up for a moment and check everything else, to make sure nothing else is going on that needs attending. So it makes sense that the brain would work in this rhythmic way. And plus, it does. Your brain's like a big rhythmic machine; that's the most obvious thing when you record from the brain. So it's increasingly obvious to many of us that the brain works not by continuous analog computation. It works by squirting these packets of information periodically around cortex, in these oscillatory cycles.

Earl    00:55:15    So there's a packet sent, then a pause, then a packet sent. That's what these oscillations are doing. And if that's the case, that means everything in the current contents of consciousness has to fit into one oscillatory cycle. And if you think about it, because neurons have a refractory period, it's actually gotta be half an oscillatory cycle <affirmative>. So Markus Siegel, who was in my lab a number of years ago, Markus Siegel and Melissa Warden published this paper where the animal would hold two pictures in mind simultaneously, in their order, much like I described to you earlier with the spatial computing. And what they found was that there were these 30 hertz oscillations in prefrontal cortex, and the spiking that represented the first object versus the second object lined up on different phases of the 30 hertz oscillation. So what the cortex was doing was juggling picture one and picture two, and the cortex was juggling them, one, two, one, two, one, two, 30 times a second <affirmative>. That's what the phase offset is. It's the juggling act. The brain's juggling them, I've got both my hands in the picture juggling them, 30 times a second. Right? And if that's the way high-level cognition works, well, there's only so many balls you can juggle within one wave.
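The capacity arithmetic behind that juggling picture can be sketched in a few lines, with the caveat that the per-item burst duration and the half-cycle assumption are illustrative numbers, not measurements.

```python
# Back-of-the-envelope sketch of items-per-cycle capacity (made-up durations).
freq_hz = 30.0                   # the ~30 Hz rhythm from the two-item study
cycle_ms = 1000.0 / freq_hz      # ~33 ms per cycle
usable_ms = cycle_ms / 2.0       # assume only half a cycle is usable (refractoriness)
item_slot_ms = 5.0               # assumed duration of one item's spiking "packet"

capacity = int(usable_ms // item_slot_ms)
print(f"cycle = {cycle_ms:.1f} ms, usable window = {usable_ms:.1f} ms")
print(f"illustrative capacity: ~{capacity} items per cycle")
```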

Paul    00:56:27    Right. So now I'm kind of conflating the idea of different frequencies of oscillations, high frequency and low frequency, with dimensionality. And I'm thinking, does low frequency mean low dimensionality, and is high frequency a more high-dimensional signal? Does that make sense?

Earl    00:56:50    Uh, no <laugh>, that's not the way we think about it. The information is expressed in the spiking of neurons and the connections between neurons forming engrams, and that's where the dimensionality comes from: the mixed-selectivity neurons that allow high dimensionality, or allow dimensionality reduction, depending on task demands. The way we think about the oscillations is that they're a way of controlling the ongoing activity in the brain, not so much to do with dimensionality reduction or dimensionality expansion.

Paul    00:57:25    But, okay. I thought you had said that the low-frequency oscillations essentially take that high-dimensional neural activity and funnel it into a lower-dimensional state. Do I have that wrong?

Earl    00:57:40    Oh, no, you're right. In that sense, yes, I see what you mean. When you have a bunch of high-dimensional information that could be expressed in your brain and you're using these alpha oscillations to funnel it, to provide these guide rails, you're actually narrowing the focus of activity toward a smaller amount of information. So in that sense, they do engage in dimensionality reduction.

Paul    00:58:01    Okay. Yeah. So the, but  

Earl    00:58:02    I was talking about more after the information's already been expressed, what's high-dimensional versus low-dimensional.

Paul    00:58:08    Right. How low-dimensional is our thinking, our conscious, subjective experience? If you can keep three to four, or however many, things in working memory, do you think about our cognition, which is rhythmic, in terms of a certain band of frequency?

Earl    00:58:30    <laugh> a certain band of frequency? Um,  

Paul    00:58:35    Well, are we alpha? Do we think in alpha, do we think in theta? You know, if thought is rhythmic?

Earl    00:58:41    No, we think with combinations of all these things; they're all doing different things. Again, the gamma and spiking are the high-energy states where information's being expressed, and alpha-beta helps control that. And theta, when you get down to theta, theta probably has multiple functions too, like all the rest of these bands of oscillations. One thing it does: gamma in cortex is often cross-frequency coupled to theta. And one idea from computational modeling is that theta is helping pace when gamma can be expressed, and alpha-beta sort of sculpts on top of that. So it's another way of moving representations out in time so they don't overlap with one another.
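Here is a minimal sketch of that theta-gamma arrangement, purely illustrative: a slow theta rhythm gates the amplitude of a faster gamma rhythm, so gamma power ends up concentrated near a particular theta phase. The frequencies and coupling strength are arbitrary choices.

```python
# Toy phase-amplitude coupling: theta paces when gamma is allowed to be strong.
import numpy as np

fs = 1000                              # samples per second
t = np.arange(0, 2, 1 / fs)            # 2 s of simulated signal
theta = np.sin(2 * np.pi * 6 * t)      # 6 Hz theta
gamma = np.sin(2 * np.pi * 50 * t)     # 50 Hz gamma carrier

# Gamma amplitude is highest near the theta peak (phase-amplitude coupling).
gamma_envelope = 0.5 * (1 + theta)     # 0 at theta trough, 1 at theta peak
signal = theta + gamma_envelope * gamma

# Crude check: gamma power near the theta peak vs near the theta trough.
fast = signal - theta                  # strip the slow component in this toy case
hi = fast[theta > 0.7] ** 2
lo = fast[theta < -0.7] ** 2
print(f"gamma power near theta peak:   {hi.mean():.3f}")
print(f"gamma power near theta trough: {lo.mean():.3f}")
```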

Paul    00:59:16    Okay. So,  

Earl    00:59:17    So a thought is all of this together. I would never reduce thought to just one frequency, one band, or one anything.

Paul    00:59:23    Yeah. But isn't there, and I can't remember the source of this, a proposed rhythm at which our subjective ongoing thoughts happen, within some range of oscillatory frequency? Like switching from thinking about cats to thinking about dogs: it's not instantaneous, and it's not everything all at once.

Earl    00:59:44    Well, a lot of this is gonna depend on the demands of the task at hand. For example, in the sustained attention task I was talking about, the one Sabine Kastner performed, where attention is waxing and waning at theta, the animal has to make a behavioral response, and that's gonna slow down the system a little bit. Tim Buschman did a study in my laboratory where he studied covert attention. The animal has to search for a stimulus in an array of stimuli, like a Where's Waldo kind of thing, only the animal is not allowed to move its eyes. It has to search with its mind's eye. So now you're not worried about something physically moving, so the system can move quicker. And we found that the spotlight of attention searching with the mind's eye, the eye is still, but the spotlight of attention moving around, searching for the stimulus, was operating around 20 hertz, at a higher duty cycle.

Earl    01:00:35    So the brain adopts whatever frequencies it can use for the task at hand. When the system can work quicker, it's gonna use a higher frequency, and when it has to slow down because of things like physical limitations, you're gonna get a slower frequency. And one thing I wanna mention, I keep thinking of it and haven't gotten a chance to slip it in yet, is that there's something called multiple realization in the brain. Everybody's brain isn't wired the same way, right? So you look at something like Eve Marder's work at Brandeis, where she's studying how food moves down the digestive tract of the lobster, I think it is,

Paul    01:01:14    Lobsters and crabs.  

Earl    01:01:15    Yeah, lobsters and crabs. And there's like three ganglia, maybe some of these details aren't right, so correct me if I'm wrong, there's like three ganglia, and she found there's something like 50,000 ways you could tune up these three ganglia to produce the exact same function. And if you look, all of our brains are gonna be a little different. The principles are gonna be the same, but the exact details of how stuff is done are gonna vary a bit from person to person. That's why I don't wanna get hung up on, you know, beta versus alpha. Probably the relative frequency is gonna matter, not a sharp dividing line. And all of our brains are gonna be a little bit different.

Paul    01:01:49    Yeah, I'm glad you brought up Eve Marder, because you talk about this in terms of anatomy as possibility, not destiny, and that sits alongside the multiple realizability idea. And that's something I've shifted on in my own thought, becoming more interested in the idea of capacity as a principle. The nature of the relationship between structure and function has always plagued neuroscience, right? But if you start thinking of structure more as possibility or capacity, that kind of frees you up a little bit, or frees me up in my thinking about it. I just think it's a beautiful principle.

Earl    01:02:26    Yep, yep. And you ask why we are so single-minded; we are incredibly single-minded. When you say humans can hold four or five things in mind simultaneously, that's under the best of circumstances, when you're testing somebody with a bunch of colored squares on a computer screen, just asking which color changed. When it comes to real complex thought, we are incredibly single-minded. So at least when it comes to consciousness and cognition, there is a single-mindedness, a single-track kind of thing, something that funnels activity in a way that allows us to engage with the things we're consciously aware of. And that seems to be highly limited in capacity, maybe because you need to funnel this information in a certain way along this infrastructure, this road and highway system.

Earl    01:03:12    Yeah. And, a bit parenthetically here, I used to give this public lecture on how you shouldn't try to multitask, cuz you can't. Right? No one can. We're very single-minded creatures, but our brains crave information. Our brains evolved in an environment where there wasn't a lot of information available, and we could only direct action to one thing at a time. So our brains probably evolved with those kinds of constraints and said, okay, this is the job at hand, so why not be single-minded, I'm getting anthropomorphic here, but why not be single-minded and develop this way of doing cognition that's single-minded and focused? We didn't really need much of anything else at the time.

Earl    01:03:56    Our brains didn't evolve in this environment where everything is available simultaneously, all vying for our attention. So why do we have a capacity limitation for consciousness and cognition? I don't know. We can explain now, using the kinds of explanations I gave, how the brain operates and why that would be the case. But why it got to this case, why it evolved this way? Maybe life is just like a Kurt Vonnegut novel: things happen randomly and everything changes, you know.

Paul    01:04:28    But why do we want to multitask? If multitasking is so suboptimal, why do we seem to want to do it, if we're just screwing ourselves over?

Earl    01:04:36    Yeah, and that's because, again, of the environment our brains evolved in. We evolved in an information-poor environment where there weren't a lot of things to pay attention to, but when some information came along it might be really, really important, like the rustling in the bushes could be a tiger about to leap out at you. So it's a paradox: our brains evolved this single-mindedness, but we also evolved this thirst for new knowledge, cuz new knowledge was adaptive and might save our lives <affirmative>. And now we're in a very different environment from the one our brains evolved in. It's not information poor, it's information overloaded. So we crave it and we can't help ourselves, because our brains evolved to think that any new information must be really, really important, and we can't turn it off.

Paul    01:05:23    But in reality, Twitter is useless.  

Earl    01:05:26    <laugh>, I’m not gonna, well,  

Paul    01:05:28    Okay. I didn’t, I thought I could get you on there, but,  

Earl    01:05:30    Okay. All right.  

Paul    01:05:31    I'll just state it then. All right, Earl, I'm aware of our time, but there are a couple more things about working memory that I would love for you to discuss a little, because our conception of working memory has changed through this, what we're calling a paradigm shift. One idea I'd love for you to discuss is the idea of silent synapses. I know that you guys worked with this and built a recurrent neural network <affirmative> with and without the short-term synaptic plasticity you were talking about, and found some effects of the short-term synaptic plasticity based on these silent synapses. So what is the idea of a silent synapse? And then maybe you can discuss that work.

Earl    01:06:15    Well, in this work we asked the question, how is it that the brain can hold things in working memory through these gaps in time with no spiking? That's where these silent synapses come in. What the spiking is doing is temporarily setting synaptic weights, leaving this impression, so that when another bolus of activity comes in, the synapses now express the information they hold. We started that work by trying to solve a simple question: what are the mechanisms that allow the brain to hold things in mind through these gaps in time? We tested neural networks built the old way of thinking, persistent activity, attractor dynamics, no synaptic plasticity, just neurons constantly spiking, and we compared those models against models that had short-term synaptic plasticity.

Earl    01:07:05    And what's interesting is that both kinds of network can solve the working memory task. In the case of the attractor dynamics, as long as you're holding one thing in mind, it can solve working memory; once you put a second thing in there, all bets are off, but that's something else we could talk about. They could both solve a simple working memory task. But what we found is that when you add in this short-term synaptic plasticity, there are all these other benefits that have little to do with working memory but are good for network functioning in general. Adding in the short-term synaptic plasticity makes networks deal with noise better: you can add noise to the input and it doesn't bother these networks. Once they have this short-term type of plasticity, they can deal with the noise.

Earl    01:07:47    And the other thing it does is allow graceful degradation of networks. In our study, with the models that had persistent activity alone, attractor dynamics, if you randomly delete as little as 10% of the synapses, the network falls apart; it can't do the task anymore. But with the short-term synaptic plasticity, you can blow away 40% of the synapses and the network is doing just fine. And graceful degradation has gotta be the way your brain works, cuz your brain's constantly shuffling and losing neurons. I know mine is. So that's an important principle of networks. Just that right there tells you something must be going on.
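The graceful-degradation test can be sketched as a simple ablation procedure. This is not the study's actual networks, and the stand-in random weight matrix below will not reproduce the 10% versus 40% result; it only illustrates the kind of manipulation being described: randomly delete a fraction of synapses and ask how similar the lesioned output is to the intact one.

```python
# Sketch of a synapse-deletion (ablation) test on a stand-in weight matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 200
W = rng.normal(scale=1 / np.sqrt(n), size=(n, n))   # stand-in "trained" weights
x = rng.normal(size=n)                               # a stored input pattern
y_intact = np.tanh(W @ x)                            # reference output

for frac in (0.1, 0.4):
    mask = rng.random(W.shape) > frac                # keep (1 - frac) of synapses
    y_lesioned = np.tanh((W * mask) @ x)
    similarity = np.corrcoef(y_intact, y_lesioned)[0, 1]
    print(f"deleted {int(frac * 100)}% of synapses -> output correlation {similarity:.2f}")
```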

Paul    01:08:26    Well, you mentioned Goldman-Rakic alongside the idea of silent synapses, but I thought Mark Stokes had a lot to do with the idea.

Earl    01:08:35    Oh, he did, yeah. The Goldman-Rakic work I'm talking about was this one paper, one of her last papers before she was tragically killed by a car while crossing the street. She showed, in slices from the prefrontal cortex, that there are these calcium dynamics that can do short-term potentiation of synapses. Mark Stokes gets a lot of credit for really introducing the idea of the activity-silent model of working memory. And he showed, in a series of really elegant work, that not only is working memory activity-silent, like we're describing, activity then quiet, but also that the persistent-activity, attractor-dynamics model treats it like a latch circuit in the brain: the stimulus comes along, you flip a light switch, and activity simply maintains that stimulus as it is until the memory delay is over.

Earl    01:09:25    Well, Mark also showed that that's not the case at all. If you really look carefully at groups of neurons, you can see the dynamics: the representations, what the neurons are doing, are constantly shifting and changing and evolving over time. So working memory is not persistent, it's activity-silent, and it's not steady-state, it's a lot of changing and shifting dynamics. And everything we've done since Mark first proposed this activity-silent model has supported that; we see the same things. Mark Stokes tragically died recently. He was a real pioneering researcher, and he was one of these people who was bold enough to stick his thumb in the eye of dogma, something I obviously appreciate, and try to tear down dogma. But he wasn't just a naysayer. He followed it up with really elegant work that supported his point. So he will be sorely missed.

Paul    01:10:21    Yeah, I had read that he recently passed as well; I think it was cancer. So, thinking about the dynamics, we talk a lot about dynamics on this podcast, and you were talking about how they're constantly shifting. Looking at the dynamics is possible because of these high-dimensional recordings we can now do, where we're recording tons of neurons. Another thing I was gonna ask you to describe is that when you guys look at the dynamics, you see different subspaces, and the different components of working memory are shifting, and this is where the goal comes in, the actual decision to make a movement or something, and the dynamics switch from encoding to the moment of decision. Maybe you can describe that much better than I just butchered it.

Earl    01:11:11    Yeah, subspace coding is a really interesting development. It's something that we're starting to look at. Tim Buschman has done some excellent work on subspace coding, really elegant stuff. In one study he has the animal pay attention to a stimulus while ignoring another one, in one condition that's visual attention, and a working memory task where you give the animal two stimuli and, in the middle of the working memory delay, you say, you can forget one of those. And what he shows is that the cortex goes into different subspaces, one for relevant stimuli and one for irrelevant stimuli. It parks things in the irrelevant subspace when you don't need to pay attention to them. It doesn't get rid of the information, because who knows what's gonna happen in the future, you may need it again. But the brain sort of parks it in this subspace.

Earl    01:11:53    And he has a recent paper which I found mindblowing. It's on bioRxiv, it hasn't been peer reviewed yet, but he shows that different cortical areas, in different subspaces, can simultaneously talk to other cortical areas in different subspaces at the same time. You read something like that and you think, God, yeah, that's the way the brain should work. It's gotta be complex like that. So the subspace stuff: for many, many years we talked about just changes in rate coding. Visual attention is an example. I pay attention to something and there's lots of activity for that stimulus; I ignore it and there's little or no activity for that stimulus. It was all just more activity, less activity. Now it's starting to feel like we were looking at the shadows on the cave wall, if I may use another analogy, and that we were getting an impoverished glimpse of what's actually going on.

Earl    01:12:43    The subspace coding is richer, more complex, and seems to be more attuned to the kinds of things the brain would need to do. Now, for people who don't know what a subspace code is, essentially what it is is different patterns of activity. If you have a certain pattern of activity across a group of neurons, that's one subspace; a different pattern will produce another subspace. So it's not just which neurons are activated or how much they're activated, it's how they're activated in relationship to one another, what patterns they form. And that's another example of an emergent property that we could not get by studying one neuron at a time.
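One common way to operationalize "different subspaces" is to fit a low-dimensional basis to population activity in each condition and compare the bases, for example via principal angles. The sketch below uses fake data with made-up dimensions; it illustrates the analysis idea, not Buschman's actual method.

```python
# Toy subspace comparison: fit a basis per condition, compare via principal angles.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials, k = 50, 300, 3

# Fake population activity: each condition lives mostly in its own random
# 3-dimensional subspace, plus a little noise.
basis_rel = np.linalg.qr(rng.normal(size=(n_neurons, k)))[0]
basis_irr = np.linalg.qr(rng.normal(size=(n_neurons, k)))[0]
noise = lambda: 0.1 * rng.normal(size=(n_trials, n_neurons))
act_rel = rng.normal(size=(n_trials, k)) @ basis_rel.T + noise()
act_irr = rng.normal(size=(n_trials, k)) @ basis_irr.T + noise()

def top_subspace(X, k):
    """Top-k principal directions of trial-by-neuron activity."""
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k].T                      # n_neurons x k orthonormal basis

U, V = top_subspace(act_rel, k), top_subspace(act_irr, k)
cosines = np.linalg.svd(U.T @ V, compute_uv=False)   # cosines of principal angles
print("principal-angle cosines (1 = same subspace, 0 = orthogonal):",
      np.round(cosines, 2))
```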

Paul    01:13:16    How high-dimensional are subspaces? How many subspaces can we have?

Earl    01:13:20    That's a good question. We don't know. This work is fairly new. I guess that's something else that could feed into this question of why we have capacity limitations in cognition.

Paul    01:13:31    Do you think that the dynamical systems theory approach, studying the dynamics in these different state spaces and subspaces, has ushered in a new neuroscience era? And, sorry to add another question on top of that, but some people think it's sort of a stepping stone to tying neural activity to cognition: that we should think more in terms of relating thoughts and cognition to these dynamical state spaces and subspaces than <affirmative> to the activity of a bunch of neurons.

Earl    01:14:09    Certainly that's what subspace coding is all about. It's looking at another dimension: it's not just which neurons fire and when they fire, it's these patterns of firing and how they form this emergent property of a group of neurons working together. And if subspace coding hadn't been discovered now, it was bound to happen inevitably, because the brain's gotta be working at that level of complexity. It can't just be individual neurons turning on and off and conveying information. It's gotta be something in the configuration of which neurons are firing, and that's what a subspace code is.

Paul    01:14:43    What do you think the fu... how do I ask this? So, I asked you about your 2001 classic, which is, what, one of the top five most cited neuroscience papers? Is that still correct?

Earl    01:14:55    Last I checked, fifth most cited paper in the history of neuroscience, man. But who's counting?

Paul    01:15:00    <laugh> Yeah, that's right. So, I don't know if this is gonna sound rude or not, but do you feel like, you know, when someone writes a hit song and then they don't have another hit song for decades? Not that you haven't been writing hit song after hit song in science, don't get me wrong.

Earl    01:15:18    Are you saying I haven't had any hit songs since 2001?

Paul    01:15:21    <laugh>, let me back up. Let me back up. No, I’m just, so what are talking  

Earl    01:15:24    Curious about then, <laugh>,  

Paul    01:15:25    I'm curious how you view that. I don't know how artists view it when their most popular song is one they wrote 30 years ago or something, and they've been putting out great work since, they got sober and they're still putting out great work. No, I'm just kidding about the sober part <laugh>. But I'm wondering how you reflect on that, and what you think moving forward, whether you're itching to write another paper that gets cited as much. I'm not talking about the quality of your work, I'm just talking about what you want moving forward and how you reflect on that.

Earl    01:16:00    Well, I'm not trying to chase glory here, I'm trying to chase science. And I'm thinking about doing an update to the Miller and Cohen model, where we bring in all these other things, like subcortical structures and the loops between them, and things like oscillatory dynamics that help sculpt activity. That would be the update. I believe, I hope, the essential idea of Miller and Cohen is still correct, but it's a matter of when it needs an update, cuz that was still very much in the vein of rate coding. So I'm thinking about an update where we bring in all these new ways of thinking about the brain. It's not that we're saying the original model was wrong, it's just a way of extending and updating it. Everything needs to be updated; science is constantly moving forward, especially for something that's 21 years old now. Damn. There should be an update at this point.

Paul    01:16:56    And you could cite it again if you do an update to it  

Earl    01:17:00    <laugh> Yeah, that's true. Should I start writing? <laugh>

Paul    01:17:03    Yeah. So, moving forward in your own career, how do you view the rest of your career playing out? Are you gonna continue? Are there things that you're interested in that you haven't studied in the past that you wanna shift to, despite the faculty saying you shouldn't <laugh>, for example? Or how do you...

Earl    01:17:22    View the rest? We're all beyond that. We're well beyond that now.

Paul    01:17:26    Did they ever apologize, by the way? No?

Earl    01:17:29    No, no, no. I mean, they can't tell me what to do anymore. Only granting agencies can do that. Only study sections can do that. Sorry, what was the question again?

Paul    01:17:41    The question is just how you envision the future of your own career and your interests, how you picture yourself moving forward.

Earl    01:17:50    Well, I don't mean to be glib, but I'd say more of the same. One thing we're very interested in doing, since I'm interested in oscillatory dynamics, is to see if I can modulate, manipulate the oscillatory dynamics and see if we can get changes in the brain that correspond with the predictions of our models. So we're now about to start a series of investigations where we're doing closed-loop electrical stimulation. Our lab developed a new ultrafast read-write closed-loop stimulation system: you read the oscillations from the brain, then you match stimulation to those oscillations. And to do that, you need a really fast read-write latency. So our lab developed a new closed-loop stimulation system, just now being installed, where the read-write latency is below 10 milliseconds.
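A highly simplified sketch of what phase-locked closed-loop stimulation involves computationally, illustrative only: the target frequency, window length, and thresholds below are invented, and a real system runs on dedicated hardware to hit sub-10-millisecond read-write latency. The idea is to estimate the phase of an ongoing rhythm from a recent window of the recording and fire a stimulation marker near a target phase.

```python
# Toy phase-locked "stimulation": estimate phase of an 8 Hz rhythm on the fly.
import numpy as np

fs = 1000                               # 1 kHz sampling
f0 = 8.0                                # assumed target rhythm (Hz)
t = np.arange(0, 2, 1 / fs)
lfp = np.sin(2 * np.pi * f0 * t + 1.0) + 0.2 * np.random.default_rng(3).normal(size=t.size)

win = 125                               # 125 ms analysis window (~one 8 Hz cycle)
stim_times = []
for i in range(win, t.size):
    seg_t, seg = t[i - win:i], lfp[i - win:i]
    # Least-squares fit of a*sin + b*cos at f0 over the recent window.
    A = np.column_stack([np.sin(2 * np.pi * f0 * seg_t), np.cos(2 * np.pi * f0 * seg_t)])
    a, b = np.linalg.lstsq(A, seg, rcond=None)[0]
    phase_now = (2 * np.pi * f0 * t[i] + np.arctan2(b, a)) % (2 * np.pi)
    # "Stimulate" near the oscillation peak, with a simple refractory period.
    if abs(phase_now - np.pi / 2) < 0.05 and (not stim_times or t[i] - stim_times[-1] > 0.5 / f0):
        stim_times.append(t[i])

print(f"delivered {len(stim_times)} phase-locked pulses in 2 s (~one per 8 Hz cycle)")
```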

Earl    01:18:38    So we can manipulate a lot of different frequencies, from low to high. We're very interested in doing that. Having said that, I just talked about a causal manipulation in the brain. But we also have to keep in mind that causality is not simple in the brain. People think causal stuff is, oh, you just do something, something happens, and you have the answer. Causality is not that simple in the brain; it's actually much more complex than people think. And causal manipulations are another tool. They're not the gold standard, they're another tool in our toolkit. Here's an example that my PhD advisor, Charlie Gross, used to talk about.

Earl    01:19:17    For example, let's say I record from the cortex of an animal while it's performing a task, and I hit the animal on the toe with a hammer while it's performing the task. Well, the animal's gonna stop performing the task, and neural activity in cortex is gonna change. Are you saying there's a causal link between the big toe and the cortex? No, obviously not. Here's the logic people often use; you see it over and over again. They'll manipulate one area of the brain, A, they'll record from another area of the brain, B, and they'll study behavior. So they change A, activity in B changes, and behavior changes. So they go, that must be the causal link, A to B to behavior, right? But what if instead you change A, that changes behavior, and behavior goes back and changes the activity in B?

Earl    01:20:00    That's equally plausible. Now, I'm not saying let's throw out all causal manipulations. I'm saying we have to think about them in a judicious way, cuz they're important tools, but the effects of causal manipulations are not as straightforward as people think. We just gotta think of them as another tool, not as a gold standard. And having said that, if I may go on, people often say, well, about oscillations, why can't you do a study where you causally manipulate the oscillations and show it has a causal effect on brain function? Well, sure, that's what we're trying to do. But all these people saying this, they're the people who are still vested in this old rate-coding, single-neuron view. And I say to them, okay, can you do a study where you change rate coding in the brain, and don't change anything else, and it changes function?

Earl    01:20:44    No, you can't. No one's ever done that study; that hasn't happened. You may think you're using a rate-coding model, but the moment you inject current into the brain, you're changing the LFPs, you're changing the oscillations, you're changing the electric fields, you're changing everything. So the kind of causal manipulation the oscillation naysayers are looking for, their own model doesn't hold up to that kind of scrutiny. And that's where we get back to Thomas Kuhn and paradigm shifting: holding the new model up to a higher standard than your own model. There are no causal manipulations in the brain that prove the rate-coding model either.

Paul    01:21:18    Have your ideas about causality changed over the years, since that old rate-coding, clockwork-like view of the brain? Cuz I think I used to have a naive idea about causality, and now I'm awash in thinking that almost everything is causal. A constraint is causal, context is causal, everything seems causal in some way.

Earl    01:21:37    Well, I think that's a good open-minded attitude to have at this point, cuz we've still only scratched the surface of how the brain works. So right now, any plausible mechanism out there should be considered a possibility, not an epiphenomenon that can be ignored. That's just paradigm defending.

Paul    01:21:56    All right, Earl, I appreciate all the time you've spent with me, and good luck with the band moving forward, and of course the science, not that you need luck with either. Thanks for talking working memory. We didn't talk much about executive function, we didn't get into the executive function aspect of working memory, but another time perhaps. So anyway, thanks for your time.

Earl    01:22:15    Okay, thank you. It’s a pleasure.