Brain Inspired
BI 160 Ole Jensen: Rhythms of Cognition

Support the show to get full episodes and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we’re performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole’s work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren’t needed during a given behavior. And therefore by disrupting everything that’s not needed, resources are allocated to the brain areas that are needed. We discuss his work in the vein on attention – you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole’s are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we’re about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.

0:00 – Intro
2:58 – Oscillations’ import over the years
5:51 – Oscillations big picture
17:26 – Oscillations vs. traveling waves
22:00 – Oscillations and algorithms
28:53 – Alpha oscillations and working memory
44:46 – Alpha as the controller
48:55 – Frequency tagging
52:49 – Timing of attention
57:41 – Pipelining neural processing
1:03:38 – Previewing during reading
1:15:50 – Previewing, prediction, and large language models
1:24:27 – Dyslexia


Ole    00:00:04    So I think what we need to do is to ask: what do oscillations do for neural computation? The way to think about it is that if you have a lot of neurons firing mm-hmm. <affirmative> like crazy, and look at the population activity, you don’t see any modulation. But if you now start to kill off neurons every hundred milliseconds, you start to see a rhythm emerge. Some years ago, I started to get big doubts.

Paul    00:00:35    Oof. Okay.  

Ole    00:00:36    <laugh>. Yes. And that is because  

Speaker 4    00:00:43    This is Brain Inspired.

Paul    00:00:45    Hello everyone, I’m Paul. Ole Jensen is co-director of the Centre for Human Brain Health at the University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we’re performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations have been linked to a bunch of different cognitive functions. Some of what we discuss today is Ole’s work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. And the overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren’t needed during a given behavior. And therefore, by disrupting everything that’s not needed, resources are allocated to the brain areas that are needed.

Paul    00:01:52    So we discuss his work on attention in that vein. You may remember the episode with Carolyn Dicey Jennings and her ideas about how findings like Ole’s are evidence that we all have selves <laugh>. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we’re about to look at before we move our eyes. And more broadly, we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence. Show notes are at braininspired.co. For those of you interested, I have reopened my online course, Neuro-AI: The Quest to Explain Intelligence, which you can learn more about through a short video series on the website, where you can also learn how to support the show through Patreon to get full episodes and to join our Discord community, or just to make a contribution by other means if you value this show. All right, thank you for being here. Here’s Ole. So Ole, you have studied oscillations in the brain for a long time now, and I feel like that phenomenon, oscillations, kind of waxes and wanes in popularity in the neurosciences writ large. I’m just curious what your view on that is, or what your experience has been. Has funding been harder in certain years, and have people’s interests waxed and waned the way it seems like from the outside?

Ole    00:03:33    Yeah, it is a bit difficult to say, because I’ve always surrounded myself with people who are interested in oscillations. Sure. So I started out looking at the hippocampus, looking into hippocampal data, but also doing modeling of the hippocampus. And there you see these very strong theta oscillations. So I think in that part of the community, there’s not really a doubt that oscillations are there loud and clear. Then I think there are also regional differences. I worked in the US, did my PhD there, and people there somehow seem a bit more skeptical about oscillations than in Europe. I don’t know exactly what makes that difference. So sure, the interest in oscillations has waxed and waned over time, but there also seem to be some interesting regional differences.

Paul    00:04:28    I hadn’t thought about that. Where are we right now? Are we at a peak of interest in oscillations? Are we at a trough? Or what? <laugh>

Ole    00:04:37    I think there’s a lot of excitement; interest in oscillations is sort of being rekindled. And oscillations are many things, of course, occurring in different frequency ranges. Yeah. So when interest waxes and wanes, it also depends on what frequency band you are looking at <laugh>. Back in the nineties, when people talked about the binding theory, where gamma oscillations, say from 40 to 100 hertz, were thought to be important for binding, the focus was very much on gamma oscillations. And I think now people are getting more and more interested in the slower types of oscillations: theta oscillations at five to eight hertz, alpha oscillations from eight to 12 hertz, and beta oscillations. So maybe there’s sort of a shift down in frequency in terms of interest.

Paul    00:05:39    <laugh>. Yeah, I guess the scale waxes and wanes too. You’re an alpha guy, right? Or an alpha male, I suppose I could say <laugh>. So we’ll end up talking a lot about alpha, and I know that’s not all you study, but staying with the big picture for a moment: spikes are easy, right? Because they’re pulses and you can count them. And oscillations are more slippery, I’d say, like neuromodulation, because there are different analog levels, and then you have to pull out the power within some range, and you have to choose that range. So this leads me to the question of what your overall worldview is regarding the causal role of oscillations. Are they more constraining? How do they interact with spikes, and how do spikes interact with them? Are they mutually causal? And then throw neuromodulation in there. For me, it’s harder to think about this super dynamic process of oscillations and what role it plays. How do I get a clear picture in my mind about how it’s interacting with the spikes and what it all means?

Ole    00:06:58    Yeah, it is a very good question. So first of all, I think oscillations are a group phenomenon, right? They are a consequence of the neurons spiking in a synchronized manner, but also in a rhythmic manner. The oscillations follow from that. Yet I would also argue that the spiking of individual neurons is driven by the group activity, the oscillations. The example would be an audience after a concert: they might start applauding, and then suddenly you hear them start to clap in a rhythmic manner. Yeah. When thinking about that, each individual is driven by the population activity, right? So it’s really the oscillations, the group activity, exercising a causal effect on the individuals, but it’s all the individuals together that create the oscillations.

Ole    00:07:56    So from that perspective, it’s difficult to make a clear causal direction unless you think about one individual neuron at a time. Hmm. But I can also tell a little story, which was very enlightening for me when thinking about oscillations, because I started out looking at EEG and MEG signals, and obviously the oscillations are loud and clear there. But of course, we want to link them back to the spiking activity. So I had this PhD student, Saskia Haegens. She was studying the somatosensory system. She started to discuss with Ranulfo Romo in Mexico, who was recording from non-human primates in the somatosensory system. And they had all these wonderful spike data and were investigating different aspects of that. They also had the local field potential data, right? But they had never really looked at them.

Ole    00:08:54    So what Saskia did was fly to Mexico, set up the collaboration, and start to analyze the data. And what we saw was that when the neurons were spiking, the spiking was tightly locked to 10 hertz alpha oscillations mm-hmm. <affirmative>. But if you looked at the individual spike trains, you did not see much of an oscillation; the firing rate was not very high, so you didn’t get this oscillatory pattern. But if you looked at the power spectrum of the local field potentials, there was a clear peak at around 10 hertz. And when we looked at how the spikes related to the phase of these oscillations in the local field potential, we saw that the spikes were very much clocked by these oscillations.
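
The locking Ole describes can be quantified with a standard measure often called vector strength (the length of the mean phase vector of the spikes). Below is a minimal, self-contained Python sketch; the 10 Hz frequency is from the anecdote, but the cycle-skipping pattern and spike counts are made up for illustration, not values from the actual monkey recordings:

```python
import math
import random

F_LFP = 10.0            # assumed LFP oscillation frequency (Hz)
PERIOD = 1.0 / F_LFP    # 100 ms per cycle

def vector_strength(spike_times, freq):
    """Phase locking of spikes to an oscillation: length of the mean
    unit phasor. 1.0 = perfectly phase-locked, ~0 = unrelated."""
    x = sum(math.cos(2 * math.pi * freq * t) for t in spike_times)
    y = sum(math.sin(2 * math.pi * freq * t) for t in spike_times)
    return math.hypot(x, y) / len(spike_times)

# A neuron that fires on only every third alpha cycle (a low ~3.3 Hz rate,
# so its own spike train looks arrhythmic), but always at the same phase.
locked = [k * 3 * PERIOD for k in range(50)]

# Control: the same number of spikes at random times over the same span.
rng = random.Random(0)
unlocked = [rng.uniform(0.0, 15.0) for _ in range(50)]

print(vector_strength(locked, F_LFP))    # near 1.0: clocked despite skipped cycles
print(vector_strength(unlocked, F_LFP))  # near 0: no locking
```

The toy numbers make the point of the anecdote: a neuron can skip most cycles, so its own spike train shows no obvious 10 Hz rhythm, yet every spike it does fire can be tightly clocked to the LFP phase.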

Paul    00:09:41    But individual neurons weren’t necessarily clocked, but among the population of neurons, they were clocked. Is that what you’re saying?

Ole    00:09:48    Uh, rather, when a neuron fired, it fired at a particular phase of these oscillations, but the firing rate was not that high.

Paul    00:09:57    Oh. So yeah. Yeah.  

Ole    00:09:58    So you wouldn’t see a rhythmic pattern from that neuron alone, because it would be skipping cycles.

Paul    00:10:03    Yeah. Okay. So it wouldn’t fire on every cycle, so the spiking wasn’t rhythmic. Yeah. I mean, that’s the idea: when an oscillation is happening, there’s this peak, right? And it’s at that peak that it’s maximally allowing, or kind of nudging, spikes toward firing, or allowing them to fire, or not suppressing them, or something. But this interaction then, between an oscillation and spikes: you can’t mess with an oscillation without messing with the spikes, and you can’t mess with spikes without messing with the oscillation. So I just want a clear picture in my own head about how to think about that relationship causally, and what it means for our cognition and brain function. Like, which should I think is more important, for instance?

Ole    00:10:54    Yeah. I mean, the causal discussion becomes very difficult, right? Because it is indeed possible to entrain these oscillations: for instance, by rhythmic flicker at 10 hertz you can entrain alpha oscillations, and you can also do transcranial stimulation with alternating current, also at 10 hertz, and entrain these oscillations. And then you can show that they have an effect on cognition; we can show that these oscillations are inhibitory. However, one can always argue back and say, hey, these are not the same kind of oscillations as those occurring naturally. Mm-hmm. Right? So that makes the causal discussion quite difficult. But our starting point is that these oscillations are there loud and clear, right? The oscillations are the strongest signal in the ongoing EEG or MEG, also during tasks. So to me at least, that warrants that we should try to understand what they’re doing. And it also begs the question of how the neurons work together in order to produce the oscillations. Hmm. And in my mind, the fact that the spiking of the individual cells is timed by the phase of these oscillations also means that the oscillations would have an impact on how these neurons compute.

Paul    00:12:13    The word constraint keeps rolling around in my mind. Do you think of them more as constraints or more as causes? I mean, constraint and causality are kind of intertwined as well; constraint could be seen as a kind of causality. I’m trying to get a sense of how powerful you see oscillations as being, in terms of responsibility for brain function. <laugh>

Ole    00:12:36    Yeah. I like to think that they’re part of organizing the neuronal coding. So, to give another example: John O’Keefe got the Nobel Prize for finding place cells mm-hmm. <affirmative>, cells in the rat hippocampus that fire when the rat is in a specific location. He also made another very important discovery, and that is this notion of phase precession, of phase coding. So you look at the firing of the place cells and relate it to the ongoing theta oscillations, these oscillations being five to 10 hertz, and they are super strong in the hippocampus; you see them with the naked eye, you cannot miss them. What they did was to relate the firing of the place cells to the phase of these theta oscillations. And as the rat moves through a place field, you see the firing happening earlier and earlier in the theta cycle.

Ole    00:13:36    So that means it’s possible to take a collection of place cells and then reconstruct where the rat is. What you now also can do is take the phase of firing into account, and then you can better predict where the rat is, right? So when you talk about the oscillation constraining the firing, we can also think about the phase of the cycle at which the cells fire as coding for information. Of course, I cannot prove to you that the brain is utilizing this phase-organized code, but at least when we decode the data ourselves, we can do a better job at sort of reading the rat’s mind, of where the rat is, by taking the phase of firing into account.
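
A cartoon of why phase helps decoding, under the simplifying assumption that phase precesses linearly through the place field (real precession is much noisier, and the field location and width below are arbitrary):

```python
import math

# Assumed place field on a 1 m track: starts at 0.4 m, 0.2 m wide.
FIELD_START, FIELD_WIDTH = 0.4, 0.2

def firing_phase(position_m):
    """Idealized linear precession: the cell fires at a late theta phase
    when the rat enters the field, and ever earlier as it crosses it."""
    frac = (position_m - FIELD_START) / FIELD_WIDTH   # 0..1 through the field
    return 2 * math.pi * (1.0 - frac)                 # 2*pi down to 0

def decode_from_phase(phase):
    """Invert the (assumed) precession law to recover position."""
    frac = 1.0 - phase / (2 * math.pi)
    return FIELD_START + frac * FIELD_WIDTH

def decode_from_rate_only():
    """A spike alone only says 'somewhere in the field': guess the centre."""
    return FIELD_START + FIELD_WIDTH / 2

true_pos = 0.45
phase_err = abs(decode_from_phase(firing_phase(true_pos)) - true_pos)
rate_err = abs(decode_from_rate_only() - true_pos)
print(phase_err, rate_err)  # phase-based error is much smaller
```

In this idealized case the phase readout localizes the rat within the field, while the spike alone (rate code) only says the rat is somewhere in the field, which is exactly the extra information Ole describes.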

Paul    00:14:23    If I use the phrase, oscillations are suggestive to neurons, is that too weak?

Ole    00:14:32    Um,  

Paul    00:14:33    Because, you know, neuronal firing is highly variable itself. And then of course, with oscillations, which I’m gonna ask you about in a second, we have these terms that are supposed to denote clear bands of oscillations, but really there are little bits of oscillations throughout all of the frequencies, right? And so that’s kind of noisy; it’s not as clear as alpha, beta, delta, et cetera. But is the word suggestive, of when to fire, too light? Does that not give enough credit to an oscillation?

Ole    00:15:06    Well, yeah, one could say they change the probability of when a neuron fires or not. Um, but I would also say, personally, I’m quite signal-to-noise driven. Mm-hmm. I study alpha because we see it. And back to the theta and the alpha oscillations, in rats and humans respectively: they are so strong, right? Mm-hmm. <affirmative>. And it is very hard to come up with a cognitive task where you do not see robust modulations in the alpha band. So to me, that also speaks to the notion that these oscillations must be important for when neurons fire or not.

Paul    00:15:53    Yeah. But one of the things I always found frustrating about thinking about oscillations is that it seems like you could take any frequency band and relate it to any cognitive function that you want. Right now in my conversation with you, I’m thinking alpha, that is attention and working memory, and then, like you were saying, gamma, oh, that’s the binding frequency. But then there are a thousand different stories about how a thousand different frequencies relate to a thousand different cognitive functions. Do you think we’re gonna get to a point where we can keep it in mind, you know: when I think alpha, I should think X, and when I think beta, I should think Y? Or is it just that, because the brain is so complicated and complex, we’re going to have a really long table of different cognitive functions that they interact with and how they interact with them?

Ole    00:16:50    Yeah, but again, I will go back to the signal-to-noise argument and say: if you, for instance, do a working memory task, you see a very robust modulation in the alpha band, not so much in the beta and gamma bands, right? Mm-hmm. <affirmative>. So that sort of constrains the issue and the interpretation. Of course, your search space or data space becomes larger when you consider frequencies, but it’s not like we see these modulations robustly in all frequency bands; they often seem to be constrained to very specific bands. Hmm.

Paul    00:17:28    A couple more broad questions before we actually start talking about the interesting work that you do. Are they oscillations, or are they traveling waves, or both?

Ole    00:17:39    Yeah, very good question. So when we look at intracranial recordings, for instance with ECoG, where you put a grid of electrodes on the neocortex,

Paul    00:17:51    On the brain. Yeah.  

Ole    00:17:52    there, yeah, there you see very robust traveling waves. If you look at hippocampal recordings, there are also reports of the theta oscillation traveling down the hippocampus. And I wouldn’t say it’s obvious, but it’s like you have all this excitable tissue, and if there’s a generator oscillating in one part of that tissue, it would be strange if you didn’t have waves emerging out from that. Mm-hmm. <affirmative>. The thing is, we have not been very good at quantifying these waves in the context of cognitive tasks, right? But I strongly believe they’re there, and I believe they’re important. It’s only now that we’re getting the technology, with multi-electrodes in animals, to record this. And observing the waves at the scalp level in human EEG and MEG data is a bit problematic, because we have volume conduction and everything sort of blurs together, right?

Paul    00:18:55    Yeah. Has your faith in oscillations as a major function, and this is a totally unfair question, I know, has it ever waxed and waned? Have you ever faltered in thinking, because you’re really focused on oscillations?

Ole    00:19:16    Yeah. So, at least some years ago, I started to get big doubts.

Paul    00:19:22    Oof. Okay. <laugh>.  

Ole    00:19:23    Yes. And that is because we are starting to look at more natural tasks where we allow participants to saccade, and we see that people saccade three to four times per second. Mm-hmm.

Paul    00:19:37    <affirmative>,  

Ole    00:19:38    If you now have an oscillation at 10 hertz, and think about the alpha oscillations doing the clocking of the processing, then it’s sort of odd that you have a relatively slow oscillation, 10 hertz meaning a hundred milliseconds per period, while you saccade three to four times per second. Right? It doesn’t seem like a very intelligent design, to use a bad expression. But what then saves the whole story is that you now find that often the saccades are locked to the phase of these alpha oscillations, and then things start to make sense again, because then you can move your eyes in a period where the alpha oscillations are not processing, so to speak. And then when your eye lands, you have the visual information coming in on the excitatory phase of the alpha oscillations.
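
The arithmetic behind this nesting argument is simple. As a back-of-envelope sketch, with illustrative numbers for saccade duration and conduction delay (my assumptions, not values from Ole's work):

```python
# All numbers are illustrative: a 10 Hz alpha cycle (100 ms period), a
# typical ~30 ms saccade, and a rough ~20 ms retina-to-cortex delay.
PERIOD_MS = 100.0
SACCADE_MS = 30.0
RETINAL_DELAY_MS = 20.0

# Launch the saccade at the inhibitory peak (phase 0, as a cycle fraction):
launch_phase = 0.0
landing_phase = (launch_phase + (SACCADE_MS + RETINAL_DELAY_MS) / PERIOD_MS) % 1.0
print(landing_phase)  # 0.5: new input arrives half a cycle later, at the
                      # opposite (excitatory) phase of the alpha oscillation
```

With these toy numbers, an eye movement launched during the suppressed phase delivers its new visual input half a cycle later, when inhibition is lowest, which is the scheduling story described above.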

Paul    00:20:33    So the eye movements are kind of nested within the alpha.

Ole    00:20:37    Exactly, yeah. And what made me nervous before observing that was that we study these oscillations, and they’re actually quite slow in frequency compared to how fast we perform, or how fast our visual system operates. Mm-hmm.

Paul    00:20:57    So your doubts were because of the timing of the oscillations relative to the spikes? Yes. But you never thought, like some people claim, that oscillations are epiphenomenal? You never had those kinds of doubts?

Ole    00:21:15    Not really, I have to admit, because we see them so loud and clear when recording EEG and MEG data. And again, it’s very hard to come up with a task where you don’t see robust modulation. So at least to me, that warrants that at least some of us should be investigating them. It’s all well and fine that some are skeptical and say they’re epiphenomena; with time, we will sort of resolve the issue.

Paul    00:21:45    Are those the US scientists who say they’re epiphenomenal, and the Europeans who say no, they’re not epiphenomenal? Is that the divide, right?

Ole    00:21:54    I, I wouldn’t say there’s an Atlantic divide like this, but, but I mean, there’s more sort of transferring.  

Paul    00:22:00    Oh, okay. Okay. So, one more question. I’m gonna ask you a little bit about the relation of some of your work to artificial intelligence later. But artificial intelligence, and more and more the cognitive sciences, think of brain functioning in terms of algorithms and objective functions, these very computable, straightforward sequences of computational steps. Do oscillations throw a wrench in that? Are oscillations amenable to algorithms? Can we integrate them into the story about an algorithm, or do they complicate that story in a way that maybe can’t be saved?

Ole    00:22:42    So there are two strands to that question. First, I very much agree with the algorithmic view. I think what we need to do is to ask: what do oscillations do for neural computation, right? We shouldn’t just correlate them with whatever function; we should really go in and ask: are there processes, computations, that can be done better with oscillations than without oscillations? And what we are thinking now is that the 10 hertz alpha oscillations in humans, as well as the hippocampal five to 10 hertz theta oscillations in the rat, are inhibitory in nature. So they are pulses of inhibition mm-hmm. <affirmative>. What you should imagine is that at the peak of that inhibition, all inputs are being blocked, but as the inhibition ramps down, the most excitable, and then the second most, and then the third most excitable representation will fire.

Ole    00:23:52    So what these oscillations give you is almost like a filter that allows only the most excitable representations to slip through. But furthermore, they are organized as sequences according to importance. If you go back and think about O’Keefe and this theta phase coding, the theta phase precession, one can construct models where you have these sequences being read out, with the spatial representations organized according to excitability, or sort of how far they are from the rat’s upcoming positions. Furthermore, for the visual system, what we also speculate is that when there is competing visual input, only the most excitable input should go through the visual system, and it needs to be organized in a code according to importance, if you will. Mm-hmm. And this is what the oscillations provide: they inhibit, and as you ramp down the inhibition, you get the activation according to excitability. So that’s sort of the algorithm we think these oscillations are implementing.
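
One way to make this concrete is a toy inhibition-ramp simulation. This is my sketch of the mechanism as Ole describes it, not code from his lab; the excitability values and the cosine ramp with a floor are assumptions:

```python
import math

# Pulsed inhibition ramps down across one ~100 ms cycle; each representation
# fires as soon as its excitability exceeds the inhibition level. Weak items
# never clear the inhibition floor (the filter), and the rest fire in order
# of excitability (a rank-order temporal code).

PERIOD_MS = 100.0

def inhibition(t_ms):
    # Cosine ramp from 1.0 at the inhibitory peak down to a floor of 0.3.
    return 0.3 + 0.35 * (1 + math.cos(math.pi * t_ms / PERIOD_MS))

def firing_times(excitabilities, dt_ms=0.1):
    times = {}
    for k in range(int(PERIOD_MS / dt_ms) + 1):
        t = k * dt_ms
        for name, exc in excitabilities.items():
            if name not in times and exc > inhibition(t):
                times[name] = t
    return times

items = {"dog": 0.6, "background": 0.2, "cat": 0.9}  # assumed excitabilities
times = firing_times(items)
order = sorted(times, key=times.get)
print(order)                   # ['cat', 'dog']: most excitable fires first
print("background" in times)   # False: filtered out by the inhibition floor
```

The two properties from the conversation both fall out: weak representations are gated away entirely, and the surviving ones are serialized by excitability within the cycle.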

Paul    00:25:03    But this is very much a temporal algorithm. And if you take, like, a deep neural network, you don’t care about time. I’ll just jump to it then: are we gonna be able to build, quote unquote, strong AI or human-like AI without this kind of temporal aspect built in, whether it’s built in via oscillations or not? And do you think we should just build in oscillations, or that kind of timing mechanism?

Ole    00:25:32    Yes. So we are right now playing around with different deep neural networks, trying to do exactly that. Oh, okay. So you are completely correct: when one considers deep neural networks or convolutional neural networks, they do not build in time, right? And if we then want to relate what’s going on in these networks to the temporal dynamics as measured in humans and non-human primates, we cannot do that. So we have some ideas for how to augment deep neural networks to incorporate time; we call them dynamical deep neural networks. And what we also want to do is put these inhibitory oscillations into the network. The way you should imagine it is this: imagine we present the network with multiple stimuli, so it could be a cat and a dog presented at the same time, and what the network then does is break that image up into a temporal code. So you have cat, dog, cat, dog coming out in the end, and it’s the inhibitory properties of the oscillations that serve to convert the parallel input into a temporal code.

Paul    00:26:51    How do you add the oscillations in? Are they just a function of the firing of the neurons, so that they emerge from that and then affect the neurons themselves?

Ole    00:27:00    Yeah. So step one is that we simply impose them. Yeah. We assume that there’s a pacemaker driving the network. But I guess what you’re getting at is: do they emerge from the network itself? We put them in, and of course it would be very elegant if they then emerged from the network, right? But that would be the next step. Yeah.
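
A minimal sketch of that "step one": a pacemaker imposed on the network rather than emerging from it. This is not the group's actual dynamical deep neural network; it is a two-unit caricature with assumed drive values, just to show how imposed pulsed inhibition turns a simultaneous cat-plus-dog input into a cat, dog, cat, dog temporal code:

```python
import math

PERIOD_MS = 100                      # assumed 10 Hz pacemaker (100 ms cycles)
drives = {"cat": 0.9, "dog": 0.6}    # assumed feedforward drive from one image

readout = []
for cycle in range(3):
    fired = []
    for t in range(PERIOD_MS):
        # Imposed pulsed inhibition: high at the start of each cycle, ramping down.
        inhibition = 0.3 + 0.35 * (1 + math.cos(math.pi * t / PERIOD_MS))
        for name, drive in drives.items():
            if name not in fired and drive > inhibition:
                fired.append(name)   # unit escapes inhibition and "fires"
    readout.extend(fired)            # each cycle serializes the parallel input

print(readout)  # ['cat', 'dog', 'cat', 'dog', 'cat', 'dog']
```

Within every pacemaker cycle the more strongly driven unit escapes inhibition first, so the network's output over cycles is exactly the alternating temporal code Ole describes.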

Paul    00:27:22    It’s so complicated and messy. I mean, obviously, you know, if you have robots, they need to operate in time. And if we go beyond deep neural networks, if you want human-like artificial intelligence to be interacting with humans, timing needs to be pretty exquisite as well, with robots moving throughout the world. So in the future, with really great AI robots, are we going to see oscillations as part of that story?

Ole    00:27:53    I don’t know <laugh>. Because, I mean, so far what we are doing is taking deep neural networks, and I think it’s amazing what they can do and how the representations emerging in these deep neural networks can be mapped onto human or non-human primate brain activity. Yeah. So I think something is correct about them, with respect to how representations are being formed. So now we are exploiting these networks to make a model of the human ventral stream. One could then imagine that in doing that process, we also get inspired for how to improve the networks in a machine learning sense, and then there would be an application that could be useful for robots. Hmm. But so far, I have no idea for how to go the other way, right?

Paul    00:28:53    Yeah. Yeah. It’s interesting. All right, so maybe we should talk a little bit about alpha. Is alpha your favorite frequency band?

Ole    00:29:04    Uh, not by choice, but somehow the data drove us there. Yeah.

Paul    00:29:10    So maybe you could just take us through, you know, the different frequency bands, many of which you’ve already mentioned. And I guess alpha, the alpha band, which is around 10 hertz, was the first to be noted, right, by Hans Berger. And it used to be called, what was it, the Berger rhythm?

Ole    00:29:32    Yeah. At the Burger Rhythm.  

Paul    00:29:34    Burger Rhythm, that’s right. Yeah. Um, and it’s, you know, classically, or, or, well, you, you should correct me, but historically, um, it’s been associated with things like, um, drowsiness, well, uh, uh, internal thinking, internal cog, cogitation, um, and sleep and things like that. And how has that, how has the story of alpha changed over the years?  

Ole    00:29:56    Yeah. So it goes back to 1924, when Hans Berger identified the alpha rhythm, as we call it now, right? And then there were quite a few investigations, but most of them concluded that alpha is associated with rest: if you close your eyes, alpha becomes strong; if you become drowsy, alpha becomes strong. So possibly these oscillations have also gotten this connotation of being somewhat boring, right? <laugh> And electrophysiologists working with non-human primates were not particularly interested in alpha, right? Because why would you study something that just pops up when nothing is going on? But then what happened was that in the late nineties, people looked at alpha in working memory tasks, and there’s a paper by Wolfgang Klimesch where he reported that alpha is actually relatively strong during working memory retention.

Ole    00:31:04    He referred to that as paradoxical alpha. Okay. So we were also doing some studies, and that’s what brought me into the field, where we looked at the alpha rhythm with working memory load, and what we saw was that the alpha oscillations became stronger and stronger the more you kept in working memory, right? The more difficult the task, the stronger the alpha rhythm. We thought we had gotten the triggers wrong and so forth, but we checked it in all kinds of ways, and it held up, and it has now been reproduced many times. So that brought us to think that maybe alpha is not about rest. In line with what Klimesch had proposed, we put forward the notion that the alpha oscillations were inhibitory. During working memory, that would mean that when you sit there and have to retain something in working memory, in order not to have interference, the visual system is shut down by the alpha rhythm. And that was later tested with different kinds of tasks: we introduced distractors that people could anticipate, and we could show that the alpha oscillations pop up before these distractors occur, in order to suppress the distraction.

Paul    00:32:29    So the idea is... okay, there are a few ideas here, right? So the first thing that you mentioned is, you know, the stronger the alpha, the more you can hold in working memory. And, please correct me, that's because if you have a really strong peak, more neurons can fire, and the neurons firing is an analog of things that you can hold in working memory.

Ole    00:32:56    That was one hypothesis we entertained. And the other hypothesis was that the oscillations were, in this case, generated in visual cortex. So they were shutting down visual cortex in order to allow, say, more frontal regions to maintain the working memory representations.

Paul    00:33:18    So it's not... I thought the story was that it was a top-down control mechanism, where frontal areas produce this alpha, and that alpha travels to, or is manifested in, visual cortex. And that is a suppressive oscillation in visual cortex, so that you're not getting incoming stimuli. So you can remember the phone number over and over, right? Because I don't want to be distracted by incoming stimuli.

Ole    00:33:46    Yeah, yeah. No, no, that I would agree

Paul    00:33:49    On. But then you just said that the early visual cortex was producing the alpha.

Ole    00:33:55    Um, yeah, but the alpha has an inhibitory effect, if you will. So the way to think about it is that if you have a lot of neurons firing like crazy mm-hmm <affirmative>, and you look at the population activity, you don't see any modulation. But if you now start to silence neurons every hundred milliseconds, you start to see a rhythm emerge, right? Mm-hmm <affirmative>. Of course, if you silence all the neurons, you don't see anything again, right? So there's sort of an inverted U shape. So the way we think about alpha being generated is actually by having neurons not firing every hundred milliseconds. So that also explains why the alpha rhythm increases in power as the neuronal firing goes down.

Paul    00:34:54    So, okay, this gets back to that interaction, right? Because I thought that the alpha was entraining them in this rhythmic way, and by doing so, sort of disrupts the incoming sensory stimulation. But then you're saying that, no, it's the neurons themselves that begin to fire rhythmically, which allows the alpha to have more of an effect.

Ole    00:35:22    Well, there's a difference between the alpha being generated in the network and what you measure. What you measure is most likely the summed activity of pyramidal neurons. And that is a consequence of oscillatory activity coming in and silencing the pyramidal neurons in a phasic manner. So then you are left with sort of collective bursts occurring every hundred milliseconds, but the stronger that inhibition, the shorter the duty cycle mm-hmm <affirmative> for the neurons to fire. And then that would emerge as a stronger peak in your power spectrum, albeit the total neuronal firing is going down.
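The duty-cycle idea above can be illustrated with a toy simulation (my own sketch, not a model from Ole's lab; the rates, neuron count, and 100 ms cycle are made-up illustrative numbers): Poisson-spiking neurons are silenced for part of every alpha cycle, and stronger inhibition (a shorter duty cycle) lowers the total firing while producing a larger 10 Hz peak in the summed population signal.

```python
import numpy as np

# Toy sketch: a population of Poisson-spiking neurons may only fire during
# the first `duty_cycle` fraction of each 100 ms (10 Hz) cycle. Stronger
# inhibition -> shorter duty cycle -> fewer spikes, but more 10 Hz power.
rng = np.random.default_rng(0)
fs = 1000                      # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 s of simulated time
n_neurons = 200
base_rate = 20.0               # spikes/s per neuron when not inhibited

def population_signal(duty_cycle):
    """Summed spiking of the population under rhythmic silencing."""
    phase = (t % 0.1) / 0.1               # position within the 10 Hz cycle
    gate = (phase < duty_cycle).astype(float)
    p_spike = base_rate / fs * gate       # per-sample spike probability
    spikes = rng.random((n_neurons, t.size)) < p_spike
    return spikes.sum(axis=0)

def alpha_power(sig):
    """Power at 10 Hz from the FFT of the mean-removed signal."""
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - 10.0))]

weak = population_signal(duty_cycle=0.9)    # weak inhibition
strong = population_signal(duty_cycle=0.3)  # strong inhibition
```

Here `strong` has fewer total spikes than `weak`, yet a larger 10 Hz peak; pushing the duty cycle toward zero kills both again, which is the other arm of the inverted U Ole describes.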

Paul    00:36:18    And that is the signature of keeping sensory stimulation at bay.

Ole    00:36:27    Yes.  

Paul    00:36:28    So then you have... so the old story of sequences of spikes firing at different phases along the peak of the alpha wave, signifying however many things you're keeping in working memory. What's the status of that story? <laugh>

Ole    00:36:47    Yeah. So that does become complicated <laugh>, because, I mean, that was a story initially developed for theta oscillations mm-hmm <affirmative>. And there is some support for it from intracranial data now, where we have been looking at ECoG data, and in this case we see eight hertz oscillations in which we have multiple working memory representations activating on that cycle. Hmm. But I have to say it is not a very clean picture, right? Because, I mean, eight hertz is just on the boundary between theta and alpha oscillations.

Paul    00:37:28    Yeah. Well, that's the thing, it's like we speak as if there's this clear line between theta and alpha. Yeah, yeah. But it's, you know, a couple of hertz or something. So, you know, I was gonna ask you, and we'll come back to this later, about individual differences in reading, and your work on studying oscillations and reading and predicting. But I'm curious about individual differences in something like working memory, right? So everyone has a different capacity for working memory, although we're all kind of the same, and it would seem strange to me that everyone is operating in the same regime, that it has to be at eight hertz. It seems like Jessica might operate at 8.5, or that might be her best oscillatory regime for working memory, and Philip might do better at 10 or something. Are those individual differences clear? And if so, what's the range? Or are our brains just so exquisitely evolved that we're all operating within these narrow bands?

Ole    00:38:32    Yeah, so I think if you go to a particular individual and measure them repeatedly, you will find that their frequency is quite stable <affirmative>. However, there are individual differences, as you refer to, and some people have been able to show that these individual differences in alpha frequency are important for how people combine things in time. So there are these paradigms where you can integrate or separate two objects depending on how close they're presented in time, and people with slower oscillations tend to integrate more than people with faster oscillations. Mm-hmm. So there are these individual differences.

Paul    00:39:20    Is it better to have a big brain or a small brain <laugh>, for a human? Yeah.

Ole    00:39:25    Yeah, I don't know. Because, I mean, some people are also trying to find links between the size of the brain and the oscillations mm-hmm <affirmative>. I'm not up to date on what the findings are, but there are these ideas around.

Paul    00:39:40    So, I mean, just on a personal note, sometimes I wonder, and maybe this is an oscillation story with me, sometimes I think that I think very quickly, but unfortunately not very deeply. And so I can think on the fly very quickly, but then I feel like I'm lacking depth in my thought, whereas some people that I've observed who seem to process things more slowly can also seem to process them more deeply. That's a really naive story. I just worry that I'm a very shallow, quick thinker. That's the issue.

Ole    00:40:11    <laugh> Ah, yeah. <laugh> So, I don't know. I listen to your podcast, so I don't share that concern.

Paul    00:40:19    <laugh> Okay. Oh, okay. We'll leave it at that, so I can continue to worry about it then. Well, so, I mean, we've touched on working memory, and let's move on to attention, because again, the story in working memory is that alpha has this suppressive role in suppressing distractor information coming in. And I'll link to all these studies that you work on, and your lab website, for people to learn more about the gritty details and such. I had Carolyn Dicey Jennings on the podcast a few episodes ago, I can't keep track now. She's a philosopher, and she's written about using the evidence of alpha oscillations as an inhibitor, in their role in attention specifically, as evidence for a self, that we have a self. And that's a very loose way to describe her detailed work. I have no idea if you're familiar with any of her work, or if you have a comment on the relation between alpha and attention and the self. But then you've done a lot of this work showing that alpha has this inhibitory role in attention as well.

Ole    00:41:34    Yeah. So, I mean, I guess I'm more comfortable talking about attention and alpha and inhibition, and then maybe we can return to the self.

Paul    00:41:45    Sure.  

Ole    00:41:46    <laugh> Yeah, maybe we can, yeah. Yes, yes. So, I mean, the studies on alpha and attention are somewhat simple, in the sense that we simply ask people to fixate and then attend to something to the right and ignore what's to the left. And what we then see is that alpha contralateral to what you should attend to is suppressed, but in the other hemisphere, associated with the distractor, the alpha increases. So that has sort of become a workhorse paradigm for the alpha oscillations. I should also say here, though, that that does not mean that each time you see alpha modulations, it has something to do with attention. So we like to think that alpha is always exercising this inhibitory role, but attention is sort of the workhorse for getting that out.

Ole    00:42:43    So what we would like to do is to generalize this to other brain regions, and maybe that's where the larger discussions come in. So now it turns out that if you look at intracranial recordings, you also see alpha oscillations in regions other than visual or somatosensory regions. If you do language tasks, you see alpha modulations in the language areas, like left prefrontal cortex. So what I would like to think is that alpha is ubiquitous. Hmm. And what it actually does is allocate resources in the brain. So, maybe to say something which is wrong: there's a saying that we only use 10% of our brain <affirmative>, but we can modify this to say that we only use 10% of our brain at a time.

Paul    00:43:37    Okay.  

Ole    00:43:38    <laugh>. Yeah. So, but it’s still wrong. Is  

Paul    00:43:40    It’s still wrong though.  

Ole    00:43:41    It is still wrong, it's still wrong. The 10% is wrong. Yeah. But maybe it's in the right range. The point being that if all brain regions were active at the same time, that would be sort of information overload. Your brain could not function, right? So what we like to think is that alpha, in general, is inhibiting all the brain regions not involved in a given task, and the alpha activity is decreasing in the regions doing the processing itself. And this is not only occurring in the visual system; it's more general, across the full brain. So this is speculation, but it's sort of consistent with the different observations we are making.

Paul    00:44:28    So it's resource allocation by kind of always telling everyone to stop, except for the relevant networks that need to be highlighted, or taken advantage of, at the time.

Ole    00:44:44    Yeah. Yeah.  

Paul    00:44:46    I mean, this begs the question, and I'm sorry to just jump to this, but you used the word attention, right? And if we think of attention as a cognitive function, a mental cognitive function, then where does that come from, if alpha is indicative of this process of attention? In other words, how does the controller, let's say it's alpha, know how to control?

Ole    00:45:12    Yeah. So indeed, there's been a lot of debate about this, and what we initially thought seems wrong, or not always correct. So we thought that alpha was under top-down control. Hmm. So if you have to attend to something to the right, and there's a distractor to the left, we thought that alpha would increase over your right hemisphere to sort of push out the distraction. It has been difficult to find evidence for that in general. There are some indications, but in general, what we actually see is that it is how much you are attending to the target to the right that then results in the alpha increasing in the non-target hemisphere. So to me, that suggests that some regions are being engaged, and then, through some sort of lateral inhibitory mechanism, alpha is increasing in the non-engaged regions. So this is also consistent with the perceptual load theory of Nilli Lavie, who has made the point that it's actually very hard to suppress distraction. It's a bit like, don't think about the polar bear walking in the door to the right, right? <affirmative> But what you can do is to focus your attention to the left, and through that, the alpha activity associated with the door to the right is increasing.

Paul    00:46:44    So I'm gonna have you do a task, right? So I'm gonna give you instructions, and it's an attention task, and I'm gonna say you're gonna be cued about which side to attend to, or something, right? So that's external; you're getting my instructions. So then you think, okay, there's a top-down thing, I need to do this. And then while you're doing the task, you're getting this sensory stimulation of where the cue's gonna be. So that is bottom-up, driven from our senses. And, you know, the question is, who's doing the doing, right? How does that then turn into the controller, the resource allocation, the alpha that's helping us to attend? It's an impossible question right now, isn't it?

Ole    00:47:24    Yeah. So, I mean, the cop-out here is to say it's prefrontal cortex, right? There you go. But then it almost becomes like a homunculus notion, right? Yes, yes. So, I mean, at least you would like to think of prefrontal cortex as sort of doing a top-down drive to engage the regions that are involved in what you attend to, and then there's a secondary mechanism in which alpha increases in the regions not involved in the task. Right? So it's like, I have not...

Paul    00:48:00    Go ahead. I’m sorry.  

Ole    00:48:01    Oh, but by that I've not solved the larger problem of how to kill the homunculus, right? Right. There's still that assumption.

Paul    00:48:09    But it is an assumption. Yeah. So you could tell a story, and maybe I'm gonna rephrase what you said, and you tell me what I'm getting wrong. So there's a mechanism, you know, you have this top-down control from prefrontal cortex, but it's not actually controlling the early visual cortex. There's an inherent, let's say "self-organizing" mechanism in, say, the early visual cortex, which is then allowed to be activated within itself. So it's a sort of internally self-organized, driven way of inducing alpha and shifting attention.

Ole    00:48:44    Yeah, yeah. That's it. So you engage one region, and through some sort of lateral mechanism, the alpha increases in the regions not involved.

Paul    00:48:55    So I wanna talk about frequency tagging. Yes, it's a really interesting method to use, and one could say disturbing on some level, perhaps <laugh>. So what is frequency tagging, in terms of oscillations, and how do you use it?

Ole    00:49:11    Yeah, so it's something we are quite excited about, because what we can do is frequency tagging above 50 hertz. So frequency tagging has been used at slower frequencies, and the idea is that you might present a participant with a visual display with two objects. One you flicker at 10 hertz, and the other one you flicker at 15 hertz. And lo and behold, you can see a response in visual cortex to the flicker at 10 and 15 hertz. And then, when you attend to the 10 hertz object, you see an increase in that signal, right? Mm-hmm <affirmative>. So that works like a charm. The only issue is that it is really annoying to see these flickers mm-hmm <affirmative>. And of course, if you're interested in alpha oscillations, then if you start to flicker at 10 hertz, everything gets messed up. Yeah.

Paul    00:50:00    You're blinking a lot, and yeah.

Ole    00:50:02    Yeah. But what we have been able to demonstrate is that the visual cortex responds to flickers up to 80 hertz.

Paul    00:50:12    But you have, like, a super, super fast projector, though, that goes way above 80 hertz, right? Yes.

Ole    00:50:19    Yeah, yes. We have a so-called ProPixx projector that can go at 1,440 hertz. So that's really cool, because we can sort of manipulate every pixel on the screen at arbitrary frequencies, almost.

Paul    00:50:34    Crazy. Yeah.  

Ole    00:50:35    Yeah. So what we then can do is have multiple objects in a visual scene, and we flicker them at different frequencies, and then we see the response in visual cortex. And, as you also mentioned, that flicker is invisible, right? It's above the flicker-fusion rate, so you don't see the manipulation. Yeah. And then we can have multiple objects in the visual scene, and we can sort of manipulate attention, if you will, and then measure the response. So the question is, why do we do this, right? So the thing about these fast flickers is that we can measure the changes in neuronal excitability on quite a fast timescale, right? Because if we flicker at 60 hertz, a cycle is about 17 milliseconds, right? Mm-hmm. So within 20 or 30 milliseconds, we can see if the attention is changing. We do this because we want to move towards investigations using more natural stimuli, where people can also saccade around. Hmm. So what we now can do is to also look at the frequency tagging of two objects before people move their eyes to one of them, and then see when people start to move their attention, and how much attention is being allocated before people move their eyes.
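The logic of the analysis can be sketched in a few lines of toy code (my illustration, not the lab's actual pipeline; the tagging frequencies, attentional gain, and noise level are all made-up numbers): two stimuli are tagged at different flicker frequencies, attending to one scales its response, and an FFT reads out the power at each tagging frequency.

```python
import numpy as np

# Toy sketch of frequency tagging (illustrative numbers, not real data):
# two stimuli flicker at 60 Hz and 66 Hz; attending to the 60 Hz stimulus
# boosts the cortical response at 60 Hz, which an FFT can pick out.
rng = np.random.default_rng(1)
fs = 1000                        # sample rate (Hz)
t = np.arange(0, 2, 1 / fs)      # 2 s of simulated sensor data

def simulated_trace(gain_60):
    """Tagged responses at 60 and 66 Hz plus noise; `gain_60` scales the
    60 Hz response (larger when that stimulus is attended)."""
    tagged_60 = gain_60 * np.sin(2 * np.pi * 60.0 * t)
    tagged_66 = 1.0 * np.sin(2 * np.pi * 66.0 * t)
    return tagged_60 + tagged_66 + rng.normal(0, 2.0, t.size)

def power_at(sig, f):
    """Spectral power at frequency f (Hz)."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

attended = simulated_trace(gain_60=1.5)    # attention on the 60 Hz object
unattended = simulated_trace(gain_60=1.0)  # attention elsewhere
```

Comparing `power_at(attended, 60.0)` against `power_at(unattended, 60.0)` recovers the attentional boost at the tagged frequency; an untagged frequency shows only noise-level power.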

Paul    00:52:13    So again, the idea is that you can tell about the attention because more of the frequency tagging, the frequency from the side that you intend to attend to, gets through, because the other frequency that you're tagging at is suppressed.

Ole    00:52:35    Yes. Yeah.  

Paul    00:52:37    Um, yeah. You mentioned being able to predict... Well, first let me back up, because, you know, we're talking about timing a lot. And then I started to think, we think of attention as just a function, but does attention have a timing? Like, how quickly can we allocate our attention? And I don't even know if that's a story.

Ole    00:53:04    Yeah, no, it is a great question, because it's something we have been really bothered about. And the thing is, we do our good old attention experiments. So we pop up a cue, attend to the left, the cue is on for a few hundred milliseconds, then we wait a second or two, and then we pop up some objects for a few more seconds, right? Mm-hmm <affirmative>. So we do our task over multiple seconds, and we tell people not to move their eyes, right? Yeah. However, in daily life, all we do is move our eyes around, right? We saccade three to four times per second, right? So we make thousands and thousands of saccades within not-so-long time spans. So now it turns out that, of course, some research has been done on that, right?

Ole    00:53:57    But if you think about what we know about attentional modulation and brain responses and such, it is always over much longer timescales. And those timescales do not fit well with something that has to happen within, say, 250 milliseconds, right? Yeah. So if you move your eyes three to four times a second, you have 250 milliseconds to play with. So within that time window, you first have to process what you fixated, but then your attention also has to move to where you want to move your eyes next. And there might be several objects you want to select from, right? All of this has to happen within 250 milliseconds. And to me, that's a big conundrum. And it also means that you need the tools and techniques to be able to study what's going on in the brain at quite a fast timescale.

Paul    00:54:54    But this is... I thought you were just gonna go right into talking about your pipelining model, or mechanism. So let's talk about that. But before you do: we move our eyes with saccades, which are these ballistic eye movements; they can be large or kind of small. And we talked about how that was locked to the alpha phase. Is the same story... I should know this. I bet a previous colleague of mine already published something about this. Our microsaccades, so we're constantly making what are called microsaccades as well, and those are these little jitters, yeah, of eye movements, right? So that your whole visual scene doesn't change, because you're barely moving your eyes. But are those also locked to alpha, do you know?

Ole    00:55:39    Um, yes. Okay. So yeah, that's work by Freek van Ede and Kia Nobre, where they were able to show that if you sit there and you attend to the left, you simply cannot help making these small microsaccades to the left. To the left, yeah. And then your alpha oscillations are also modulated. Okay. So there seems to be a correlation there. So to me, that speaks to a coupling between the saccadic system, the saccadic motor system, and the alpha oscillations. And at first that might sound a bit mysterious, because we thought alpha was a visual rhythm and so forth. But thinking about it, of course the saccadic system and the visual system have to be tightly coupled. Sure. And they also need to be tied together in terms of the timing of their processing. So it's a speculation, but we would like to think that the alpha oscillations are very much involved in doing that timing.

Paul    00:56:48    Man, seriously, my previous advisors are gonna kill me for not just knowing this off the top of my head. I'll edit that out. I'll edit that out. <laugh> I'm just kidding.

Ole    00:56:56    Good. Yeah, no, but, I mean, these are new findings, right? Yeah. In the sense that it's relatively recent that people have started to look into microsaccades. Yeah. And the oscillations.

Paul    00:57:10    Yeah. Okay. Yeah. I had colleagues in my lab looking at microsaccades, but I couldn't remember if they tied them to oscillations. So I'm hoping not, now. I'm gonna have to look it up when we're done speaking here <laugh>. So, okay, when we were talking about attention, you were talking about the allocation of attention, and being able to predict where attention would be allocated by looking at this alpha. And then we were just talking about the timing of attention, and you said that was a big problem. And that brings us... I think this is a good time to talk about that pipelining mechanism I mentioned a moment ago. So what is that pipelining mechanism, and how does it relate to the timing of attention? And maybe, I don't know, we could segue into the studies on reading that you've done as well.

Ole    00:57:58    Yes, certainly. Yeah. So, I mean, the setup for this is: in the 200 to 250 milliseconds between each saccade, you have to squeeze in both the processing of what you fixate, and you have to squeeze in the planning of the saccade and the pre-processing of where you want to move your eyes. How can all that happen within 250 milliseconds, right? So we think pipelining is involved. So what do we mean by that? Imagine two people doing the dishes. One is washing and then passing a plate on to the other person, who's doing the drying and then putting the plate aside, right? Now, of course, if they do this in a fully serial way, where the person washing does not take the next plate before the person drying is done, it would be slower compared to if they somehow work in parallel.

Ole    00:58:55    So when person two is drying, person one is picking up the next plate to wash, right? Mm-hmm <affirmative>. So this is sort of the essence of how we think about pipelining. There are several processes going on, but at different levels of the hierarchy. Hmm. So the way to think about it for the visual system is that you fixate on an object. First, the low-level features are processed, maybe in early visual cortex, right? And then the processing of that object moves on to object-selective cortex, where it is categorized as, say, an animal. That then leaves early visual cortex open to process the next item, which can then move on to object-selective cortex, which is now done with the first object, and so forth.

Ole    01:00:00    So exactly like washing and drying: in the visual hierarchy, there's processing of multiple items, but at different levels of the hierarchy. So that allows you to process what you fixate on, but once that has been processed in the early visual regions, it also opens up the resources there to do the previewing of where your eyes are going to move next. And in my mind, it's simply not possible for the visual system to operate unless there is some sort of pipelining going on. Mm-hmm <affirmative>. Because there's simply not time to do the item processing sequentially.
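The timing argument behind the dish-washing analogy can be written out as simple arithmetic (a toy sketch of my own; the 100 ms stage time and the two-stage hierarchy are made-up illustrative numbers, not measured values):

```python
# Toy timing sketch of pipelined vs. serial processing: each item passes
# through two processing stages, e.g. early visual cortex and then
# object-selective cortex.
stage_ms = 100    # time each stage needs per item (illustrative)
n_items = 4       # items (fixations) to process
n_stages = 2

# Fully serial: an item must clear both stages before the next one starts.
serial_ms = n_items * n_stages * stage_ms

# Pipelined: once item 1 moves to stage 2, stage 1 starts on item 2.
# Total = time to fill the pipeline once, then one stage-time per
# remaining item.
pipelined_ms = n_stages * stage_ms + (n_items - 1) * stage_ms

print(serial_ms, pipelined_ms)    # prints: 800 500
```

The key point is the throughput: serially, each new item costs `n_stages * stage_ms`, while in the pipeline each additional item costs only `stage_ms`, which is what lets fixation processing and previewing both fit inside one roughly 250 ms fixation.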

Paul    01:00:45    Yeah. So, I mean, we often... <laugh> it's old news that the brain operates in a parallel fashion, right? Yes. Yeah. But then, when we think of something like the ventral visual stream, we always talk about it in terms of, well, first V1 processes lines and edges, blah, blah, blah, and then it gets up to higher areas, objects, and then we can categorize it. But then we never talk about the dust that's left behind. And it sounds like sequential processing. So first you see the chair, and then nothing happens. Yeah. And then you see the car. Yeah. And then nothing happens. But I like this idea that at every step, in the wake of some sort of visual processing, there is then an opening of resources to do other things, and, as you're saying, preview other things.

Ole    01:01:32    Yeah. Because, I mean, there's also the possibility that everything is going on in a fully parallel process, right? So the two objects, the cat and the dog, are being processed at the same time and sort of pass through the hierarchy. But that also creates a bottleneck problem, due to the hierarchical organization of the ventral stream. Mm. And that's why I would like to think that this pipelining mechanism allows both for sequential and parallel processing at the same time. Is

Paul    01:02:02    This, go ahead.  

Ole    01:02:03    Yeah, I should say, this is so far speculation, right? So we don't have the evidence for this yet, but that is what we are trying to establish now. But going back to the... sorry.

Paul    01:02:17    No, I just... I don't know if you're gonna talk about reading, but it just made me think of, you know, the convolutional neural networks that you talked about, that you guys are working on implementing oscillations in. Is pipelining a part of that story as well?

Ole    01:02:30    Yeah. Okay. Because the next question becomes: again, imagine the two people doing the dishes. I mean, they'd better be coordinated, right? Otherwise things go wrong, and plates would drop. So how is that coordination achieved? Well, we think it's by means of these alpha oscillations that are doing the temporal coordination. And earlier you mentioned traveling waves. So we would like to think that there's this traveling wave in the ventral stream that sort of carries the representations through.

Paul    01:03:12    Gotcha. Okay. So, is any of that published on a preprint server? I haven't seen that. Or have you published anything on that?

Ole    01:03:20    Yeah, so we have a TiCS paper where we speculate

Paul    01:03:25    On this. Ah, okay. Okay.  

Ole    01:03:26    But, again, we are still in the process of trying to find the empirical evidence.

Paul    01:03:33    Do you want a postdoc? Are you hiring? No, I'm just kidding. <laugh> So, this reading work that you have done, and maybe this leads naturally into talking about the reading. One of the take-homes is that faster readers are better at previewing the upcoming word. Yes. So... yes. I'm a really slow reader, so that's bad news for me. But maybe... well, I'll ask you later about comprehension and how that comes into the story as well. But, yeah, what have you found in terms of reading and predicting upcoming words when you're looking at a given word, et cetera?

Ole    01:04:08    Yeah. So that's back to this thinking about the pipelining and the frequency tagging mm-hmm <affirmative>. So now it turns out that we want to study sort of natural vision, visual exploration of natural scenes, and how you comprehend those, and that turns out to be difficult, right? Because the objects are not well quantified, and stuff like that. So going to reading is actually a good stepping stone for that, because the words in sentences are better quantified, and you only make saccades in one dimension, right? So what we have been doing is to first look into the saccade times. And it turns out that in the eye movement literature, it has been studied for many years how people do previewing.

Ole    01:05:07    So basically, if you fixate word number n and word n plus one is out there, how much do you preview it before you move your eyes? Now, it turns out that how long you fixate on word n is largely independent of the word frequency, of how difficult the word n plus one that you are previewing is. Hmm. So that has resulted in different models that are quite sequential in nature. So we challenged that sequential view of reading by using frequency tagging, because what we are doing is flickering this word out there in the parafovea, the n plus one word that you're about to move your eyes to, at 60 hertz. And what we see is that for more infrequent words, the more difficult words, there's stronger frequency tagging compared to easier words. So this suggests that if there's a difficult word out there in the parafovea, more attention is being allocated to it. So this is, to us, evidence for the notion that you're doing parafoveal processing during reading. Mm. So it sort of forces a more parallel view of reading.

Paul    01:06:30    Is it attention or previewing, right? So you're going along, your fovea, it's like you're reading the word. Yeah. Claustrophobic. No, I'll say that: you're reading the word "I," and then in the periphery, you're not at the word "claustrophobic" yet, but that's an infrequent word. So the sentence is, "I am claustrophobic." Yeah. And "claustrophobic" is an infrequent word. And you're saying that the attentional resources allocated to "claustrophobic" are greater. Does that mean, so, are you previewing it more then?

Ole    01:07:02    Yes. Yeah. But in this case, I would say covert attention and peripheral previewing is one and the same. So, okay. Yeah. I mean, what we are getting at is that we want to take the tools and experiences we have from investigating spatial attention, covert attention, and then apply them to more real-life examples, such as reading. And reading is all about fixating, allocating covert attention, making a saccade.

Paul    01:07:35    And just to throw this in there, because I think it's an intriguing result, we are talking about the timing of attention. And one of the things that you found is that even if you present these words really slowly, the attentive previewing happens immediately, even if it doesn't have to.

Ole    01:07:53    Yes. Yeah. And this previewing, it happens within, so you fixate on word n, and even then the previewing effects start to occur a hundred milliseconds in, right? So it really changes how we think about the speed of the visual system when we are working with these more natural settings.

Paul    01:08:20    And then really frequent words, like "the," it's not that they necessarily get skipped, is it? Is it just that you don't need to allocate much attention to them because they're so automatic, because they're so frequent? Well, what is that mechanism like? How do we kind of skip over the word "the" but still know it's there, et cetera?

Ole    01:08:41    Basically, yes. So you would not linger too much on "the" when your eyes actually move to that word, and often you would even skip it, right? But even skipping that word implies that it has been previewed. Hmm.

Paul    01:08:59    Okay. So, so I, I started this little segment off by saying that, um, faster readers have better previewing ability, essentially.  

Ole    01:09:08    Yes, yes. Yeah.  

Paul    01:09:10    Or slower readers, I don't know if you've tested for comprehension on the material being read, but are those slower readers better at it? How does this interact with the comprehension of reading?

Ole    01:09:23    Yeah, no, it's also a very good question, right? So, I mean, there's this joke about a guy who took a speed-reading class, and he said, "I read War and Peace overnight." And then his friend asks him, "So how did you like it?" And he says, "Well, I don't really remember" <laugh>. Right, right. So that's the point. I mean, of course, if you read really fast, you don't comprehend too much, right? But in this particular study, we checked comprehension, though not on a graded scale. The typical finding, though, is that faster readers are also better readers and also better comprehenders. Okay. Unless you really push people very hard, then that turns around. But in general, fast readers are also better readers.

Paul    01:10:14    So, again, I'm coming back to individual differences. I'm wondering if there are different types of people: people who read fast but comprehend really well, some people who read slowly and comprehend better than the fast readers, et cetera. Like, how much stock should we put in this? How nuanced are these effects? Does this mean that I should work harder to read faster so that I can comprehend better? Or does my comprehension lead to faster reading? Like, what should I do?

Ole    01:10:41    Yeah, so I don't know. I have to say I'm not a reading expert, right? But we are collaborating with reading experts, and the questions you ask are some questions we would like to be able to answer with our techniques. We also want to apply our approaches to see if we can come up with strategies for teaching people to read better and more efficiently. We also want to take this to children, to investigate when they start to preview. Mm-hmm. <affirmative>. And is that related to their comprehension? And can we, through that, also come up with strategies for how to teach them to possibly read better? Hmm.

Paul    01:11:29    I was gonna make an analogy to our conversation here in terms of prediction. You know, I can predict what you might say next, right? But it's different, cuz I have to wait until you say it to verify it. But when I'm looking at a scene or reading something, I'm constantly predicting; I can preview the upcoming words. So there's that kind of prediction of what word might be upcoming happening, and at the same time, you have this previewing going on. Yeah. And as I'm reading, I read to my kids every night, you know, and they're fairly simple books, so I'm really predicting kind of far in advance what's semantically gonna happen, conceptually gonna happen. And then I'm also previewing these words. How far out does that previewing happen? I guess out as far as your parafovea will take you, right?

Ole    01:12:26    Yeah. So we don't know, but we think it goes more than one word, maybe two, three words. So that could also explain why you see the previewing effect already at a hundred milliseconds. But you mentioned something which is also very important, and that is prediction. So in the study you referred to, we were very careful about having words that could not be predicted, because we wanted to take away the prediction effect. Mm-hmm. However, of course, predictions are important. So now, Yali Pan, she's the postdoc doing these studies, she's running a study in which she's specifically looking into prediction and how that affects the parafoveal processing. Hmm. And our hypothesis is that if you actually can predict what's coming up, you allocate fewer attentional resources to that word.

Paul    01:13:27    So often when I'm reading, and sorry to keep asking you these personal anecdotes as questions, but it happens very frequently, especially reading to my children, that I'll still have a full line below, right? So I'll be on the next-to-last line of a page, and then, without reading all the words, I can be on a word, and I don't know if I'm taking it in in the periphery, but I can turn the page and still complete the sentence and the next line that was on that previous page. So then it's almost like it's staying in my working memory or something. So you have this working memory component in addition to the previewing and the predicting.

Ole    01:14:06    Yeah. So I guess there's the previewing, and then also sort of reading ahead silently before you speak out loud what you read, right? And then there's working memory coming into play as well.

Paul    01:14:22    But can you read ahead in the previewing? I suppose that's what previewing is, in some sense: reading ahead.

Ole    01:14:29    Yes. But the question is how much you read ahead without moving your eyes, right? I mean, you can also move your eyes and read the sentence and then keep it in working memory and then sort of say it out loud.

Paul    01:14:43    You know, scientific journals, and old classic books, print in these really narrow columns. Yeah, yeah. You probably know this: what's the difference between reading like that, not being able to preview as much linearly out to the right, if that's the way your language goes, versus publications with really thin, single-spaced lines but very wide margins, where you have to read, you know, 50 words for every line, versus reading like four words per line in these thin columns? So, you know, does this speak to how we should print? <laugh>

Ole    01:15:28    Possibly. I mean, again, I'm not the expert here, but there must be some sort of optimal medium where the columns are not too wide and not too narrow, right? Personally, I find it quite annoying if the column is too narrow.

Paul    01:15:42    Right? Oh, it's like three words, and every other line has a hyphen. Yeah. I'm surprised they haven't changed that. I won't name publication names or anything <laugh>. So, okay. I was gonna ask you, I'm glad that we brought up predicting as well, because I was thinking about how to relate these findings to the transformer mechanism in artificial intelligence. Yes. And these large language models. I'm sure you're aware of the studies about large language models predicting the upcoming word, right? And then if a word has a high enough predictive probability, the models that generate language will insert that most highly predicted word. And I'm sure you know about the studies in neuroscience, with MEG and EEG and fMRI, doing this linear transformation to map on and say, oh yeah, that's what our brains are also doing; we can decode brain activity such that it's predicting. So how do you think about this in terms of the predictive capacity of the transformer attentional mechanism? I should ask you what you think about its attention as well, and this previewing, this parafoveal ability. Because, just to insert one more thing, transformers can do everything in parallel, right? So I don't know that there is any advantage in previewing; it's like they have infinite previewing or something.

Ole    01:17:05    Yeah, yeah. I mean, first of all, these large language models, right? They work brilliantly at capturing statistical properties of text. And I think they're an excellent tool for also doing brain-imaging types of neuroscience. So if you look at classical psycholinguistic studies, it's all about making the perfect data set and controlling every sentence and word for predictability and what have you, right? So when psycholinguists work, they spend a lot of time on making these perfect data sets. That's all well and good, but of course, if you want to study more natural language, then there's an issue of how you quantify the properties of the different words.

Ole    01:18:06    And then I think tools like GPT-2 or GPT-3 are brilliant at quantifying, say, the predictability of specific words in whatever text you take, right? So, as you also mentioned, people are doing that now to look into which signatures of the brain activity reflect the prediction in relationship to these specific words. Mm-hmm. <affirmative>. So as a tool for quantifying different properties of words in natural text, they are excellent. But I guess what we're also hinting at is: are our brains doing something similar to these large language models, right?
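[Editor's note] The quantity Ole is describing is usually called surprisal: a language model assigns each word a probability given its context, and the negative log of that probability measures how unpredictable the word is. In practice this is computed with GPT-2-style models; the sketch below uses a toy bigram model on a made-up sentence just to show the quantity itself, with all tokens and counts being illustrative.

```python
import math
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Maximum-likelihood bigram model: counts of word given previous word."""
    pair_counts = defaultdict(Counter)
    for prev, word in zip(tokens, tokens[1:]):
        pair_counts[prev][word] += 1
    return pair_counts

def surprisal(model, prev, word):
    """-log2 P(word | prev), in bits; higher = less predictable.
    (This toy version assumes the pair was seen in training.)"""
    counts = model[prev]
    total = sum(counts.values())
    return -math.log2(counts[word] / total)

tokens = "the cat sat on the mat and the cat slept on the mat".split()
model = train_bigram(tokens)
# "cat" follows "the" in 2 of 4 occurrences -> P = 0.5 -> 1 bit of surprisal
print(surprisal(model, "the", "cat"))  # 1.0
```

A neural language model replaces the bigram counts with a probability over the whole preceding context, but the per-word surprisal it yields is used the same way, e.g. as a regressor against MEG or EEG activity.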

Paul    01:18:52    Yeah.  

Ole    01:18:54    Yeah. And of course, there's a lot of discussion about this topic these days. Certainly the architecture is very different, right, from how I would imagine the neurons do it, but that does not necessarily mean that they might not have similarities in how they function.

Paul    01:19:19    But take, for example, previewing, right? So, yeah, previewing is a great thing, because I can see what's kind of coming up next, but it'd be even better if I could see the rest of the chapter all at once, right? Yeah. And so should I think of previewing as a limitation that a large language model, for instance, overcomes? Or should I think of it as something that's integral to our human-like intelligence and therefore should be considered more, you know, like building oscillations into large language models, or into deep learning models, right, or any type of AI? How should I think of previewing in that respect, do you think? And attention?

Ole    01:19:59    Yeah. At least with previewing and prediction, when you're reading, you're probably also building up expectations for what will be coming up. Yeah. In the next sentences. Maybe not word for word, but from a more conceptual point of view, right? Yeah. So then you sort of match those expectations with the actual text as it unfolds, and then you might update your expectations if they're wrong, right? But then that sort of gets into the realm of predictive coding, right? And how these updating mechanisms are operating.

Paul    01:20:41    All right. So, in a moment, we'll have a little extra time, so I can ask you some questions that some of my Patreon supporters have sent in. And also, I just have a few extra questions for you. But what are you excited about these days? I'm sure it's more than one thing, but is one of them building these oscillations into the deep learning type models? What else can we look forward to? What's exciting you these days?

Ole    01:21:07    What's really exciting to me is a new type of MEG system, right? So we talked about EEG, where you measure the scalp potentials. Mm-hmm. <affirmative>. But with MEG, we are measuring the magnetic fields generated by neurons in the brain. Typically, we use sensors that are immersed in liquid helium, so they become superconducting, and we can measure these tiny, tiny magnetic fields. It works like a charm. But if we want to study children, for instance, their heads are too small for these MEG systems, so they just bob around. Hmm. Furthermore, the sensors are also a bit far from the frontal lobes, because you lean back. Now, it turns out that there's a new type of sensor called optically pumped magnetometers. With those, it's an optical technique where you can also measure very small magnetic fields.

Ole    01:22:05    And these sensors do not require being cooled by liquid helium. So several groups are now making these so-called OPM systems to measure the brain activity like an MEG system would. But you can put the sensors closer to the head, so you get better signal-to-noise. You also can measure better from frontal regions, and in particular, you can start to study children much better, because you can adapt the helmets to the individual head sizes of the children. So I'm quite excited about the prospects of now developing an OPM system for investigating children. Also, to get back to oscillations, we know that alpha oscillations change in frequency over the lifespan. So if you take, say, a four-year-old child, they might have alpha oscillations at six, seven hertz, and then those accelerate to about 10 hertz when the child is 10, 12 years old.

Ole    01:23:08    Ah. And if you were to go back to the causal issue of these oscillations, well, if these oscillations change in frequency over the lifespan, we can also ask: how does that impact your processing, your visual processing, your attentional processing, and so forth? But we talked about reading. We can also use these systems to investigate previewing in children: when children start to preview as a consequence of, say, maturity, when they learn to read fluently, and whether that previewing is important for fluent reading, and so forth.
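[Editor's note] The developmental shift Ole mentions, alpha accelerating from roughly 6-7 Hz in young children to about 10 Hz by age 10-12, is typically quantified as the individual alpha peak frequency: the frequency with maximal power in a low-frequency band of the spectrum. A minimal sketch, on simulated signals with assumed sampling rate and band limits:

```python
import math

def band_power(signal, fs, freq):
    """Single-bin DFT power of `signal` (sampled at `fs` Hz) at `freq` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / (n * n)

def alpha_peak(signal, fs, lo=4.0, hi=14.0, step=0.5):
    """Individual alpha peak: frequency with maximal power in [lo, hi] Hz."""
    freqs = [lo + step * k for k in range(int((hi - lo) / step) + 1)]
    return max(freqs, key=lambda f: band_power(signal, fs, f))

def oscillation(freq, fs=200, duration=2.0):
    """Pure sinusoid standing in for a recorded alpha rhythm."""
    n = int(fs * duration)
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

# Hypothetical child vs. adult rhythms, using the frequencies from the discussion.
print(alpha_peak(oscillation(7.0), 200))   # 7.0
print(alpha_peak(oscillation(10.0), 200))  # 10.0
```

Real pipelines estimate the spectrum from many epochs and correct for the 1/f background before picking the peak, but the peak-in-band idea is the same.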

Paul    01:23:51    But, speaking of causality, right? In the developing brain, all these synapses are developing and being pruned, and you're still myelinating, and the structure is slightly changing. So there's still that causal interaction that is kind of an open question, right?

Ole    01:24:08    Yes. Yeah. But we can still, I mean, if you have a hypothesis about what the alpha frequency is doing in a given cognitive task, then we can also ask: how does the performance in that task change as the frequency of these oscillations changes?

Paul    01:24:27    I know you're also interested in dyslexia and studying that. What's the hypothesis there? So dyslexia has long been underdiagnosed, I guess, and I know you're not a medical doctor. Yeah. But all my children's friends seem to have dyslexia, and I don't know if I do. So many people probably had dyslexia and were just not helped, because it was underdiagnosed. Are dyslexics not previewing, or is it a previewing malady? What's going on? What's your idea?

Ole    01:24:59    Yeah, also a very good question, because there's also this debate in the field about what the root causes of dyslexia are. And I think there's a strong consensus that children diagnosed with dyslexia have a problem mapping words onto sounds, right? So it's in that translation from the text to phonology where there are problems. Mm-hmm. And that's also what they have been trained on. Then there are people who would argue that dyslexia is about spatial attention problems. Mm-hmm. Now, there are different kinds of dyslexia, right? So of course, both could be correct. Yeah. But one ongoing hypothesis is that children with dyslexia have to work hard on each word in order to do this mapping, so they do not preview. So it's not necessarily a spatial attention problem they have at first, but they process every word one at a time. So it's very serial processing. However, through that, they do not train themselves in this sort of spatial allocation of attention during reading. So that would also explain why some people find spatial attention problems in children with dyslexia.

Paul    01:26:32    So  

Ole    01:26:32    The idea, again, this is a hypo  

Paul    01:26:34    Go ahead. That's a hypothesis.

Ole    01:26:36    It’s a hypothesis,  

Paul    01:26:37    Yeah. So the idea would be: because attention is a limited resource, if they're allocating that limited resource more to each word, then their previewing is disrupted.

Ole    01:26:48    Exactly. Yeah. And then they don't get trained, to the same extent as children not having dyslexia, in allocating spatial attention. And then you might find a genuine spatial attention deficit, albeit the root cause is problems in mapping text onto phonology.

Paul    01:27:08    Oh, that's interesting. So, Ole, thank you for spending time with me, and I appreciate you explaining so much of your work and talking oscillations with me. Here's to oscillations and their continued waxing in the pantheon of phenomena to study brains and cognition. So thanks for being with me.

Ole    01:27:26    Yeah, thanks very much. It was great fun, and a really good discussion.