Brain Inspired
BI 132 Ila Fiete: A Grid Scaffold for Memory

Announcement:

I’m releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.


Support the show to get full episodes and join the Discord community.

Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist," and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.

Transcript

Ila    00:00:03    I practically fell out of my chair. It was just such elegant periodic grid-like responses of cells. It was just a physicist's dream. So you've got, like, a very long clothesline, if you will. And then, as you move through the world, you're kind of moving along the clothesline, and you're also getting sensory data. And the hippocampus is like the clip: the hippocampus just clips the clothesline to the sensory data and pins them together. But there's one difference I think is fundamental. I tell my students that biology is fundamentally different from physics in one way.

Speaker 3    00:00:50    This is Brain Inspired.

Paul    00:01:04    Hello everyone. It's Paul. That was the voice of Ila Fiete, who runs the Fiete lab at MIT, where they study the dynamics and coding principles that underlie computation in the brain. That's actually what their website says. So the hippocampus, and the hippocampal complex, is one of the most widely studied brain regions, especially these days, after the discovery of place cells in the hippocampus and grid cells in the entorhinal cortex, which is a major gateway to the hippocampus. We've discussed place cells and grid cells and a host of other spatially oriented cells multiple times on the podcast, which together are thought to encode what's often called a cognitive map, and cognitive maps are what help us navigate the world. And recent work has shown that cognitive maps seem to help us navigate our own thoughts, among the abstract concepts we hold in memory and manipulate to do cognitive things. But the story of how place cells and grid cells get formed isn't finished.

Paul    00:02:02    And part of what we discuss today is Ila's work supporting the idea that grid cells, uh, are providing place cells with a kind of pre-built scaffolding, which the place cells can sort of latch onto in different ways in different contexts. That's the very short version of the story, which Ila elaborates. We also discuss her background in physics and how that shapes her neuroscience approach. And we touch a little on a recent review she coauthored, which is all about the dynamical systems approach of analyzing the attractor landscapes of low-dimensional structures among populations of neurons in the brain. In the review, she goes into more detail about all of the brain regions in which that approach continues to be successfully applied. Show notes are at braininspired.co/podcast/132. On the website, you can also find out how to support the show on Patreon. Thank you so much to all my Patreon supporters, as always.

Paul    00:02:58    And I've had a few people ask me when the Brain Inspired neuro-AI course will be available again. That is happening very soon. It will be available from April 10th through 13th; then it won't reopen for a while. So if you want to learn more about that, go to braininspired.co and click the neuro-AI course menu button, where you can also sign up to be notified when it becomes available. And for this round, I'll have a few free videos about some of the open questions and what's missing, uh, in AI and in neuroscience. So check that out if it's of interest to you. Thanks for listening, and enjoy Ila. Hello. You have a physics background. Do you consider yourself a physicist first now, or a neuroscientist? What do you consider yourself these days?

Ila    00:03:46    That’s a great question. I guess a neurophysicist if there is such a thing.  

Paul    00:03:50    Oh my gosh,  

Ila    00:03:53    No, no, I've heard other people use it. I think what that means is: I think of physicists as people who have a certain perspective and a certain outlook, more than, you know, a certain set of problems that they work on. So I feel like anything where you try to understand the natural world from the perspective of, um, you know, reducing things down to the simplest possible models, but no simpler, in a sense, right? Those kinds of approaches, I guess, I see as the physics of the natural world. And so in that sense, I guess, uh, the brain, as much as it does crazy complicated biological things, is a physical object in the natural world. So I think that, uh, you know, it's both physics, but I'm very much a neuroscientist. I think all the problems that I'm interested in are very much tied to neuroscience or related to the brain in some way.

Paul    00:04:49    Well, I know that one of the things that you're interested in is memory. And I was just trying to recall... I believe it was about 20 years ago, I went to, I believe, one of my first SfN meetings, and I cannot recall exactly, as human memory is a little imperfect, and mine is extremely imperfect, but the, like, presidential...

Ila    00:05:08    Keynote.  

Paul    00:05:11    Yeah, that's true. Yeah, that's very true. Anyway, the speaker was a neuroscientist, not a physicist. Uh, and this is, like, the big beginning talk at SfN. A Scottish fellow, I can't remember his name, but he was saying that we've come to the limits of our abilities as biologists in the neurosciences and neurobiology, and we need to hand it over to the physicists. And this was, you know, 20 years ago. And I guess since then, there have been more and more physicists coming over. I mean, neuroscience is an agglomeration of many, many different disciplines, so anyone can really come in. Do you find yourself surrounded by more and more physicists these days, or physics-minded people, as you would put it, I suppose?

Ila    00:05:57    Uh, yeah, that's interesting. That's a very interesting question. There's definitely been a lot on the computational and theoretical side; there are a lot of physicists who come into the field. But there's also been a lot of, you know, the technology development, um, you know, imaging, microscopy, a lot of those tools and techniques, even MRI, right, those were developed in physics. Uh, you know, MRI was, um, you know, through Rabi and Ramsey in physics. And so I think a lot of the tools that we're using, the ones that are not on the genetic and sort of, uh, molecular biology, biological engineering side, come from physics. And so I think there's definitely an important role for physicists in biology. I guess even Seymour Benzer and various others, you know, had, like, a physics background, a physics influence. Uh, I don't know if I'd call it being surrounded by physicists, but as you said, I think it's more like, um, a lot of physics-minded people who are thinking about systems-level problems.

Ila    00:06:55    I think that one difference between biology and physics is that physicists, maybe because of the history of studying statistical mechanics, are really acutely aware of this problem of emergence, right? The fact that you can have a lot of entities that are relatively simple working together in large numbers, and you can suddenly get behaviors out of it that you couldn't predict, and couldn't tell stories about, um, you know, from a smaller aggregation of those things, right? So just like many molecules of water can make this thing that's just fluid and wet, and then you lower the temperature and it freezes, right? Those are the things I think about as emergence.

Paul    00:07:37    But if I, but if I tried to freeze you, you would resist, because you're autonomous, right? And you need to survive. So that's a different flavor of problem. A lot of the traditional physics problems, even the ones with emergent properties, have to do with, uh, inanimate objects, essentially.

Ila    00:07:55    Yeah, no, that is true. I guess that is fundamentally the question of, uh, agency, and, uh, you know, this feeling that, you know, I am the master of my fate and the master of my ship, and...

Paul    00:08:12    I  

Ila    00:08:12    Could have an opinion about what happens to me. I guess those are definitely things that feel almost beyond physics. And it's, uh, I mean, it's fascinating. I don't know. I mean, I guess these are things that philosophers have asked about for a long time, right? Like, you know, should we be thinking about, you know, the self and the ego and the soul; are all of these things different from the material that makes up our brains? And I don't know, I guess as a physicist, and as a scientist, a natural scientist, I have to believe that somewhere there are, you know, physical substrates of all of those feelings and that sense of agency. And...

Paul    00:08:52    So you don't have free will then.

Ila    00:08:54    I definitely do not have free will. And nothing is my fault, just to get that totally clear. We're settling big philosophical questions here.

Paul    00:09:06    I wonder if we mean the same thing about free will. We're not going to, we're not going to go down that road, though.

Ila    00:09:09    We're not going to, yeah. I just, yeah, but I guess physics tells you that it's either, you know, either things are stochastic or they're deterministic. You know, it could all be a deterministic system, or maybe there's stochastic stuff, but then you can't predict what's going to happen. Um, so there's certainly randomness in behavior that you couldn't have predicted, but that doesn't necessarily mean it's free will.

Paul    00:09:32    Uh, on your website, uh, it states that your group seeks to understand high-level cognitive function from the bottom up, and that's in bold. So this is a little bit related to physics. I don't know if you would call physics bottom up, because it's, you know, traditionally reductionist, but that's kind of a bottom-up approach as well. What does bottom up mean to you, uh, from the point of view of how you study, uh, cognition?

Ila    00:09:57    Yeah. Again, oh, I think that, uh, so there are many different approaches to studying the brain, I think, like, totally complementary, right? So, you know, on the one hand people study, you know, molecular things, like, you know, genetic influences in the brain, all that stuff. And that's definitely bottom up. I guess to me, bottom up means something a little bit less, uh, low-level: maybe something at the neural level, how activity gives rise to circuit-level outputs and how circuits, you know, contribute to behavior. So that's what I mean by bottom up. And then of course there are the people who study top-down questions; they start from the top, as in machine learning, and...

Ila    00:10:39    The top would be, like, you start with a problem, and maybe you formulate, in an engineering sense, a model of, you know, what could be a good solution to this question, right? What would be a good way to accomplish this function? Um, and then build models, and then maybe search, right, for... once you have your model, you search for: is there a counterpart to that solution in the brain? And I would say that that's top down. And I think that it has to be an all-of-the-above strategy. One thing that I've learned about biology is that, you know, it's never one or the other. If you ask a question about biology, it's never this or that; it's usually both. And I think that also for understanding the brain, it's going to be both.

Paul    00:11:23    It has to be right.  

Ila    00:11:24    It has to be, right? Because if you're down in the weeds looking at molecules and you don't ask how they ultimately affect behavior, then, uh, I mean, ultimately, why are we doing neuroscience? I think every neuroscientist will agree that ultimately they want to explain behavior. So in a sense, you always have to make that link. And the question is: are you linking it by going from the bottom up, or are you linking it by going from the top down? And, you know, uh, I think, why should you only do it one way? You should really come at it from both directions and then link up where you can make those links. It's important to do that. Now, there is one difference I think is fundamental. I tell my students that biology is fundamentally different from physics in one way, right?

Ila    00:12:09    And this maybe gets to your question about, you know, agency and other things. Um, in physics, we know how to make a hierarchy. So we know that, you know, there are, sort of, length scales and there are energy scales. So at the smallest scales and the highest energies, you know, particles matter, subatomic particles matter. But then we can say, well, if we are not concerned with those energy scales or those length scales, then we can replace them; we can sort of have a, you know, a phenomenological description, some kind of, uh, you know, coarse-grained description of what's going on at the length scale that we're interested in, and then work at that length scale. Basically, we don't have to worry about the small length scales, and whatever we're finding on the larger length scales is not going to go back and feed back into the smaller scales, right?

Ila    00:12:55    Like, particles of course give rise to atoms, and then atoms give rise to molecules, and the molecules give rise to stuff, and then, you know, stuff gives rise to, like, galaxies and all that, right? But there's no way, as far as we understand physics, that galaxies can go back and, like, feed back on which particles exist. But I think that, fundamentally, in biology, you know, genes, uh, can give rise to, um, you know, cells and circuits and behavior, but then the behavior ultimately leads to evolution, and selection, natural selection, feeds back into which genes are then replicated and how they then affect the behavior downstream. So I think it's just fundamental that the longest scales, the biggest scales, feed back into the smallest scales in biology. So in biology you just don't have any principled way to discard scales; you just don't have the luxury to say that at this scale, these smaller details don't matter. And I think that our role in neuroscience is to try to figure out, right, when the small scales affect the scales of behavior.

Paul    00:14:07    And at what temporal scale, I mean, there’s spatial and temporal  

Ila    00:14:11    Temporal scale. Yeah, that’s right. That’s right.  

Paul    00:14:14    W where do you, I’m going to ask you a little bit later, unless we want to talk about it now about dynamical systems and attractor landscapes, where does, is that bottom-up or top-down because that’s a huge concern field right now, right?  

Ila    00:14:28    Yes, it is. I would, I think of that as bottom up, I guess, because, you know, I mean, you've got neurons, and we are trying to understand how neurons can give rise, at the circuit level, to some states, and how the states evolve over time. And it's not necessarily with a function in mind, right? It's sort of: if you have a circuit, and you've got connectivity, and you've got, uh, feedback, uh, how does that affect the states of the circuit? At least that's how I think about it. And then I think, okay, given these dynamical sorts of behaviors that the circuit can now exhibit, how do we map those properties or behaviors onto, like, tasks or computations that, um, you know, we know the brain performs, so that they're useful for the brain in generating behavior. So I guess I definitely think of it as bottom up. But now that we have theories and models and some insight into the brain actually using these kinds of dynamics and, um, fixed points and, uh, attractors, then maybe we can also approach it top-down, which is, we could say, now, given this new problem that, you know, the brain, uh, is solving, or this new question I'm asking about what the brain does: how does it do this certain cognitive computation or form this representation?

Ila    00:15:45    Maybe now I can, you know, invoke an attractor from a top-down perspective and say, well, this is the competition that requires a certain type of dynamics. So maybe let’s go look for, um, get visited factor in green,  

Paul    00:15:58    But do you, so, you know, having your theoretical background and your theoretical bent, I'd say: do you think about brain processes first, or do you think about your toolbox, your theoretical toolbox, first, and then look for brain processes to apply those tools to? Or do you see, oh, uh, hippocampus, what could that be doing, and then ask what's available to apply to it?

Ila    00:16:22    Yeah, I'm, I'm very much a questions person. So I, uh, get interested in questions about, you know, how is the brain doing something, what is it doing? Uh, and then I try to find whatever tools I need to solve those problems and answer those questions. So I guess, yes. And again, I think that, you know, people approach problems in so many different ways. There are tools people, as you know; they've developed some really powerful tools, and then, you know, they look for what problems they can crack using their powerful tools. And of course that's been a very powerful approach in our field. Um, and so I would put them into the tool-developer category: people who are primarily developing shiny new, really amazing technologies that allow us to do the kinds of things that we can now do and ask. Um, and then I'm the person asking the questions and looking for what tools might help me solve them. And so I'm kind of a tool scavenger.

Paul    00:17:25    Tool, scavenger.  

Ila    00:17:27    I scavenge whatever tools I can find to answer the questions and, you know, and, and, and the hope is the brain’s got like a lot of things up it’s lead, right? Like it forces you to maybe invent some tools. Then you ask the question, maybe, I mean, one question is if we have all the math that we need to be able to describe, you know, what’s going on in the brain, I don’t know the answer to,  

Paul    00:17:49    There’s always more math needed. That’s, that’s a problem, right? For, for people going through, you know, learning neurosciences, why don’t you just learn all math, because that will all be handy at some point. Right. But you can, but you have to pick and choose. And I know that you don’t think that there’s one right way to go through your, your study, your path, right. To where you get. But there, you know, math is important, especially these days. Maybe, maybe we’ll come, we’ll come back to what the exact right way to go through is because I know that you don’t think that there is an exact right path. Let’s start talking about cognitive maps. Uh, w when did you get interested in, in the hippocampus?  

Ila    00:18:28    Yeah, the first time I got interested in hippocampus was I was a postdoc at the companies that you can do to go visit. And, uh, I was there and my husband is a theoretical physicist. Uh, I, he had just completed his PhD in physics at Harvard. And I had gotten my PhD in physics at Harvard of the week at that  

Paul    00:18:47    Point. Um,  

Ila    00:18:49    Uh, and yeah, and so then we were looking for, you know, postdoc where we could be the same place and, uh, the theoretical physics Institute have these independent positions. So we both went there. Uh, Greg, my husband had, you know, a bunch of collaborators to work with, uh, uh, mentors and I had the incredible luck to have a position there and be a free agent and do my own thing, but also meant that I didn’t have somebody to work with. And it was, it was a difficult time. I was, you know, kind of isolated and, you know, casting round looking for something to do, finishing up some old work and by serendipity, and I’m not a big reader, I’m not a huge reader of the scientific literature. Um, I don’t know if that’s good or bad. I mean, there are pros and cons to reading, I guess, you know, you know, what’s going on if your bead, but if you read too much, maybe, you know, you know, maybe you’re too much caught up  

Paul    00:19:38    In the create your own thoughts. Yeah,  

Ila    00:19:40    Yeah. Yeah. I think I’m difficult. I mean, there is like, it’s interesting. That’s an interesting trade-off that students often ask about how much should I be anyway for better or worse? I didn’t read too much, but I sporadically read and I happened to be reading, um, some, you know, ending up the morning meeting. Um, he wasn’t catching up and I saw this paper that up here in nature by Torco half-day at Everett Moser, uh, it was their first report of red cells. And, uh, that, you know, it was just, it just, and it was just happened to be within the week of their publishing the paper. And I practically fell out of my chair. It was just such elegant, you know, periodic grid, like responses of self, um, as a function of position, this variable, that’s not periodic it’s local, and you’ve got this periodic red response of cells. Um, it was just a physicists dream, right? Like how on earth does a crystalline who has thoughts arising from this? You know, really what we think of as a messy biological squishy. Yeah. So it was, that was it. I mean, I think I remember that same day. I just wrote down three questions about what I’ve read on grid cells. And that became my research agenda in a sense for the next,  

Paul    00:20:56    Can you just close your eyes, throw a dart, open it up and it’s, it’s, uh, uh, the Moser, the, the nature paper tracker cells. Yeah,  

Ila    00:21:04    That’s right. I mean, I had some time there to think about what I might want to do and I wasn’t sure. And that it was, if it’s off,  

Paul    00:21:11    It was clear. And so then you’ve been really focused ever since then. It seems, uh, on grid cells, hippocampus. And  

Ila    00:21:18    The reason, the reason also I was interested in grid cells was, um, so one was this unique phenomenon of their, you know, periodic responses, so appealing to physicist the physicist in me. But then it was also the fact that you had these spatial navigation circuits, I think are like a microcosm of, uh, cognition, more broadly written. Um, so, uh, you know, cognition involves memory. It involves like, you know, representing, you know, extracting like some, uh, you know, latent information, uh, from complicated sensory data about the world, right? Like for example, theory of mind is I’m extracting, I’m looking at all your actions and stuff, and I’m, I’m extracting like your mental state. Right. So similarly in navigation, you’re looking at all of this like scene data and then extracting some abstract variable, like your position in the world. Right. So it sort of, you know, extracting some, um, you know, a latent variable, um, from, you know, a bunch of latent variables really from, you know, complex observations, you know, in time maintaining a memory and then building longterm memories, which are like maps mapping the world. So there were all of these elements of, you know, my interest even in song learning was sort of like, how do we acquire this memory of, uh, of a tutor song? How do we then build a memory of our own song as, as we learned to generate it? Right. So memory’s always been at the core of that. And Chris and that’s kind of, um, sort of the, the high level question  

Paul    00:22:41    I see, well, then we should talk about hot field networks. Um, so, so hot, hot field networks are kind of classically thought of as a good starting point, at least for a model of memory because they settle into these, uh, attractor, low energy states, but, um, your work, and of course, others as well has shown that there are some less attractive aspects of hot field networks, uh, as a model for our own memory for our, well, I’ll just let you explain why are hot field networks, not enough to explain, uh, how our own memory works.  

Ila    00:23:17    Yeah. So hop to the models. So maybe I can, you know, very, really describe lot the models, right? So couple number models or, you know, models where you’ve got, you know, neurons, uh, model as simple units, which, uh, you know, some, their input from all the other neurons in the network, um, with some rate and then apply some threshold and then decide to respond or not respond depending on whether the <inaudible> and the beauty of popular networks is just the elegance of the theory. Um, there were the first sort of formalization or how a neural circuit can, um, you, two types of memory really like kind of have a long-term memory, which is, um, learning on the week. And then also have a short-term memory. It’s just maintenance of activity states that, you know, kidneys maintain. And so the way that the hospital network works is that the idea is that if you, um, drive the neurons and the network for the pattern, like sometime evacuation, zeros, and ones or minus lessons, um, and then you, the rule, the learning was really simple. The rule is just strengthen, uh, the weights between a pair of neurons, if they’re co-active and then, um, and then, and then make the weights negative. That being the pair, a pair of neurons is, um, those, those neurons are like, um, uh, opposite, right? So one is off the other side,  

Paul    00:24:34    But these are symmetric weights though. So it’s, it’s one weight, but it’s shared between each unit, essentially,  

Ila    00:24:39    That’s right. That’s right. These networks have symmetric weights. And so if the neurons are anti-correlated, then you should have negative rates between them. And if they’re correlated, then they should have, um, positive. And that’s basically the learning goal. I’d say you give a pattern of activation and then change the weights based on the correlation of neurons and that activity pattern. And then you can apply the second activity pattern and then change the way it’s a more, and then a five third pattern and so on. And what’s beautiful is then those patterns now become, uh, fixed points of the, of the dynamics. So it means that now in the future, if you now present the network with one of those, um, patterns as an input, um, the network will sustain that pattern it’ll remain in that state, you know, uh, even, even in the absence of the, particularly the input.  

Ila    00:25:24    And it’s also true that like, if you give a crappy version of the pattern, I noisy version, then the network will pattern complete and, um, you know, recall the whole pattern. And so that was the beauty. So these networks are called content addressable, uh, memories, because you, you, um, reinstate or recall that memory, if I just giving a fragment of the memory as the input. So the memory itself, or some fragment of the memory is the believable for the memory. So you can pull it up with the content. So the content addressable and the auto associative to these recurrent connections. And so, yeah, I think they’re just really beautiful that being super important in how we think about, you know, how circuits can maintain states like persistent activity for short-term memory and stuff like that. And, you know, there’ve been extensions like, so Hopton, that works with pretty sweet sets of, um, memories.  

Ila    00:26:16    Like these are like the speed patterns. And then there are these, um, you know, extensions of generalizations of those where, you know, if you have, uh, like, uh, there are ways to then build continuum, a continuous set of fixed points, uh, which are all stable activity patterns that then can be used to represent the continuous variables. So, um, say you want to represent a variable, like the orientation of a bar and the visual scene or the orientation of your head compass heading direction is moving around then. Um, you know, it’s very analogously to these, the screen out for that works there ways to construct ways that make the networks to be able to store and retain, you know, or persistently express the, the, the analog variable, um, and all of its values, uh, over time. So the hostel networks have been really powerful, but they’re also a little bit, um, maybe at least the discrete output numbers seem to have some, um, pathology.  

Ila    00:27:14    So if you look at, if you take off the network and you do what I was saying, which is you give a pattern and then you look at the correlations and then change the weights and keep doing this, then, um, if you have about end neurons of network, you can do this with about pattern. Same number of patterns are going on with your arms in the network. Um, so you can pack in, you know, more and more patterns and you just keep incrementing the weights, but then once you go past and patterns, then you add in one more pattern, single more pattern, uh, the whole network, all the previously memorized patterns, uh, just go away. So it though the whole network catastrophic really sort of fail.  

Paul    00:27:56    So, so if we were going to analogize that to the human brain, that would mean that we would need lots and lots of neurons to store what we, the number of memories that we actually can store. It’s not, not a feasible number. If we actually used a hot field network, uh, as our memory bank, it wouldn’t be feasible that some of us may be the host not included might be able to store many, many memories.  

Ila    00:28:21    Yeah. I mean, I guess the issue. Yeah. So certainly if you look at, so what was the area in the brain that was maybe hypothesized as a hospital network? So the classical area in the brain hypothesized to be a cop, the network was a C3 and hippocampus. So the subcapital sub field that has recurrent exercitation and is known to be involved in memory and the, uh, so in, and of course, you know, this is the famous, uh, patient Hm Hm. Had a hippocampus illusion. Uh, I’m sure, you know, you’ve covered this in various of your other podcasts, but basically when he had  

Paul    00:28:56    Probably  

Ila    00:28:56    You probably have, right. So when, when him got a damaged a campus, he was unable to form a new memory. Uh, and so the idea was that, you know, hippocampus C3 specifically is involved in, um, in creating these associative memories. And, um, and, uh, maybe the hospital model is a good model of seats. So now if you look at like the road in Deborah campus, um, where, you know, people have spent a lot of time studying hippocampus and in votings because, you know, rodents, uh, have, uh, you know, for all the various reasons that are the practical laboratory, but they have, you know, I mean, they’re, they’re, they’re great at spatially, you know, moving around, running around and making memories. So it turns out that, you know, hippocampus is generally involved in memories. Um, but it’s also specifically involved in spatial representation and spatial memory.  

Ila    00:29:47    So that’s another connection that me just general memory and then facial representation, it’s facial hair. So if a campus is involved in, in spatial memory and <inaudible>, uh, it turns out that hap when you place these animals mammal in, you know, uh, in a spatial environment and let them walk around then place cells, the cells in the hippocampus, uh, and, uh, firing it’s specific locations in the, in the arts. And, uh, so then, um, so then, uh, this was discovered by John, uh, Keith. And so he called him please cells because, um, you know, you could kind of look at the firing of these cells in a small room. And then if you know, cell a was firing, then, you know, you were at that one location corresponding to, you know, the, the, the neighborhood of places where that, that cell fires. And so turns out that all the principal cells, the pyramidal cells and hippocampus, T3 were all pleased, though.  

Ila    00:30:38    They all have, like, if you put the animal in enough environments, then, uh, you know, every cell that you record, principal, cell C3 look up. And so, okay, so now this is interesting because this is like fundamentally a memory area, but it’s got this it’s very much spatial because every cell expresses facial TV. And so now we can ask the question about capacity, um, which, you know, we were just discussing and help them metrics. And so it turns out that, you know, a good bound on the number of cells in the hippocampus in the rodents is like 10 to the six physical campus. And so if you have something like 10 to the six cells, a million cells, then, um, then we can ask the question of supposedly cells were tiling the world, right? Like they were tiling the space, um, as he went around. So, you know, these places are called platelets because each cell has one field and a sufficiently small space, as far as the, you know, it’s like a union model, GLA grandmother, cell, like representation for place. So if ours were one place at one place, not other places  

Ila    00:31:44    They do. Yeah. It says that’s a very interesting complexity, which we think get to next. But basically, like if you just consider that it, you know, it’s just like the simple code where, you know, once I’ll represents one location and we just dial all the world with this simple code, then each other that you’ve got like 10 to the three neurons per linear dimension, right? Like if you’re having duty space, so got 1,005 thousand neurons, and now each neuron has, you know, a field with right. Some resolution with which it’s coding the space like 10 centimeters, then you can do like 10,000 centimeters per linear dimension. And so that works out to not very much total mapping phase or map space. And so this clearly suggests that, you know, if we’re thinking about, um, the brain as, or the hippocampus C3 as like a Hopton network, you’re very quickly out of capacity, certainly.  

Ila    00:32:37    Um, you know, uh, you know, you can represent at most, you know, you know, if they’re in neurons to a million neurons, you can represent like a million locations with this resolution, which ends up being not very large at all. So something’s got to get right. And so that, that can, that doesn’t seem to work. So what’s the solution. And so you mentioned remapping, so, yeah. So it turns out that if, you know, recorded neurons in multiple locations, like in multiple different environments, then the same place cell can actually have feels, you know, one field in one environment and then a different field in a different environment. And if you look at the cell phone relationships that we play cell, so to place feels maybe overlapping and environment one, and then if you bring the animals environment too, it could be that one of the cells has a field and the other just doesn’t have a field, or it could be that now they have two fields that are far apart, right?  

Ila    00:33:30    So in other words, even though they were laughing before, they’re no longer overlapping in the second environment. So this also relationships are, are, you know, are scrambled. And this happens, you know, at the population at large. So it looks like you just kind of have a random shuffle of all the cells, and then you be, you know, um, you know, you read, uh, allocate the cells to fire different locations, completely unrelated to what they did in the person’s life. So this is just like, it’s really complicated, right? It’s not so simple and it’s not low dimensional representation campus. And then there was these beautiful experiments from Albert Lee who, uh, you know, had the idea to put animals in these very long tracks and the record that the statistics of place cells in long and see what happens. And I also mentioned Andre Fenton did beautiful experiments in two-dimensional larger spaces.  

Ila    00:34:16    And he showed that once you go from these small environments to much bigger environments, then suddenly the cells that seem to have a simple place coding, right? Like one place per cell, even in this single two-dimensional large environment. Now they pop, they have multiple fields that popped up, but kind of irregularly space, but now there were multiple fields. So now we’ve gone from like thinking, you know, these places that we call the place fields, because it looked like they were coding for a place are now suddenly doing something much more common peroneal. So in Albert Lee’s long crack experiments, he found that there were cells that had, you know, um, over 40 meter track had something like, you know, 20 field. And, uh, and the fields looked like they were kind of randomly organized. So now we’re talking about some kind of common tutorial Cody or each place though has some random, you know, constellation of fields.  

Paul    00:35:04    Okay. Are those the studies that also showed, you know, one place cell might have a lot of fields, whereas another place I might have very few and the distribution among those is seems random. Is that  

Ila    00:35:16    That’s right. Exactly. So that’s right. It was the same study. So he showed that. Exactly. So it seems like they, they were not able to discern a structure in the firing of the distribution of fields, uh, per cell, but the one structure, the one piece of a structure that was, that was preserved across environments. And, you know, for the first half of this long pack and the second half a longer ad was the average tendency of a place to put down the field. So some cells had a higher average tendency to put out fields and other songs have lower average tendency if you put that field. And that itself was like a long tail distribution. And so, you know, there really were, you know, many, many cells that had, you know, almost no fields. And then, you know, sometimes it had very many fields in that. Um, so yeah, exactly. So for all the world play spells and all looking like these things that have just been kind of very random, common tutorial coding, and so they’re building maps. And so maybe, you know, so the question is, is, is, is the word place fell a misnomer,  

Paul    00:36:20    Every, every, uh, created name is a misnomer. It turns out right. But then we’re stuck with them. So anyway,  

Ila    00:36:27    And they are evocative, you know, so they, they have a role, but yeah, exactly. And, but, you know, and I guess, you know, you know, I show of course, point to her heart. I can bomb and, you know, others who argue that, of course, because we know that humans have a campus is essential for just general memory and general learning and not just the spatial domain. Um, there’s no sense in which they can only be as a function of space. So it was just that, you know, if you were studying them through the lens of a spatial task of animals, physically moving around in the world, doing very little else other than just randomly running around in the world. And so by definition, almost we were just studying their spatial correlates without regard to what else they might be in code. And now when people record place cells, um, you know, on more complex tasks that involve contingencies or something like operative type behaviors where, you know, the Avastin, you know, make a decision and then make a choice and get a reward. Now they find a place that was really, there’s almost nothing that these places aren’t sensitive to. So it’s almost any task, a relevant variable, or even task irrelevant variable. There are plays those that have, uh, you know, coding or tuning to, to those variables as well.  

Paul    00:37:36    So if you were, if you were going to be in charge of renaming the place cell, uh, well, first of all, let me just guess, and then I’ll ask you what you would rename it. Would you rename it? The internal scaffolding cell?  

Ila    00:37:48    Ah, good question. So I would name, I would give that name to grid cells as the internal scaffolding cells. And I would call place, does the scaffold associated cells,  

Paul    00:38:00    Or  

Ila    00:38:01    They would be like a part of a scaffold. Yeah. I’d call them part of the scaffold network. That’s right. I call them the scaffold networks.  

Paul    00:38:07    I’ll let you think about this longer. We can come back to this scaffold networks. Okay. All right.  

Ila    00:38:11    Scaffold networks. That’s actually a colleague of mine who works in engineering. Um, that had a very nice description. Actually. He was talking about something else, but it really applies here. I think the scaffold network is like a clothesline, you know, and then it’s got pins, right? The clips where you lay the clothes out to dry. And then everything that we experienced in the world is clipped onto those clips and clips. And that’s how I think about the scaffold number. So it’s the global  

Paul    00:38:42    I thought you were. So did you come to think about, are we skipping? Are we skipping ahead too far to talk about the role of grid cells? Um, you know, do we need to describe more about place cells because the way you’ve described it now, now it’s making me think that you have come at this from, uh, trying to solve the mystery of place cells, but then you originally got interested in grid cell. So how did you, uh, come, come into this and maybe we can just get to the story of, um, what you’ve discovered and why you would rename it, the scaffold networks.  

Ila    00:39:12    Yeah, absolutely. Yeah. So we first started by working with itself and I feel like even the grid cells have, uh, you know, this complex looking geometry of their response and it looks complex because there are these multiple fields and they’re, you don’t want a lattice all that is, uh, I wouldn’t say that that immense structure almost in the end lens, real simplicity too. And that’s what it seems to be. It seems to be the case that the structure lends simplicity to our ability to understand them. Uh, and so then we felt like we had made enough progress at least enough to satisfy my level of curiosity, that I felt comfortable than asking the really hard question of whatever place someone’s doing, because they seem very, very, very complex because of this remapping phenomenon that you mentioned. Um, and so, okay, let me make one or two more points about place cells, right?  

Ila    00:40:10    So, so where are we? Right. So where we are is we’re saying that, um, you know, haka networks, uh, where, you know, the thought was that if there’s an instance of popular that bricks in the brain C3 and hippocampus would be a really good candidate for hopper networks. And now we said, well, look, facial learning is a form of burden if that’s a form of family. And so, um, we can then compute the capacity of these networks, um, by looking there’s facial responses. And we’re saying we’re really falling short, right? And pop networks have this property where of course, once you go past their capacity, it’s all over. Like there are these pounds that they’re like this blank slate, you start with a blank slate and you started writing the blank slate. And by the time you like, you know, cover the blank slate once over.  

Ila    00:40:59    And, you know, if you want to start writing, you know, have you seen this old letters from like the 18 hundreds where people paper was really expensive? And so they used to write, you know, they used to write their letters like, you know, horizontally across. And then when they fill the page, they would rotate it perpendicular and then they’d write across over it. Yeah. You should see those. It was like, it was a thing. It was like a, a widespread thing to do. But anyway, so if a camp is like, you start this blank slate, you write on it. And then once it’s full, it’s full, in fact it’s full and you cannot go back and retrieve once you filled it in, basically it magically erase it. Right? Like you can’t and, or, or it’s just like it  

Paul    00:41:38    Because there are no minimum at that point. Right.  

Ila    00:41:40    That’s right. That’s right. Or it’s like a proliferation of minimum, if you will. Like, it’s basically  

Ila    00:41:45    Too many minimum. It just becomes a spin glass. They’re just, you know, exponentially many minimum and you can’t control any of that. That’s what happened. So, um, so you can’t like, it’s basically like the whole board goes white instead of going blank and blank for your writing. Again, that’s totally bizarre. And it seems like we just don’t have enough cells. And, but at the same time you’re telling me, well, please, those are remapping. So, you know, I mean, there’s somehow like forming one map and then they’re forming a different map, but that can’t be right with cocktail. That works because again, like, you know, I mean having that whole set of maps is like a whole bunch of memories too. So there was, um, nice work by McNaughton and Skaggs showing how you could fit like multiple maps into like the same kind of network weights.  

Ila    00:42:32    But again, when they compute the capacity, it was multiple maps. That’s even worse. Like it’s a real problem. So if you want it to fit, you know, a number of maps, if the scaling is very bad again, and so, you know, something’s going on, right. So that was kind of a puzzle with HIPAA capital cells and, and kind of, I, I understood that this is a problem and hustle networks, aren’t quite doing it, but I didn’t know what to make of it and then be backed off. And then you were thinking about grid cells, um, continued to think about grid cells. So it turns out the grid cells have beautiful properties. And, um, should I, should I get into grid cells?  

Paul    00:43:10    Well, so in their simplicity, they’re beautiful or is it more complicated than  

Ila    00:43:15    It turns out that in their mechanisms, they’re very simple, but it turns out that in terms of their abilities, there’s a lot of richness. Um, and the richness has some beautiful mathematics. That’s kind of like very elegant from a faculty perspective, but also from the brain’s perspective, if it has some real use and then the grids all set up the scaffold that maybe I’ll say is like an alternative to help your model. So what is the grid cell? So these were, yeah,  

Paul    00:43:44    Go ahead. No, w um, refresh the listener’s memory because, you know, things need to be repeated. We we’ve actually talked about cognitive maps quite a bit, but, um, I think it’s worth repeating, like, because you did such a good job with describing place cells. Uh, let’s talk about grid cells then, and then we can get into your, your model and the explanation.  

Ila    00:44:02    So grid cells are these felons that, um, like place those, they like to fire at specific locations. You know, people are moving around the room. If mice are moving around the room, uh, rats and moving around a room. Um, even if they’re crawling around in a room, it turns out that in their brain, not in the hippocampus, but in the end around the cortex is a part of cortex. It’s a special, older part of cortex. It’s not the neocortex, which, you know, is all of the visual processing. It’s actually, you know, part, it’s the same kind of cortex called allocortex it’s related to the piriform cortex, which is a factory cortex. And then there’s Android, which protects. So anyone on cortex is really privileged because it is the gateway of all the critical information that goes into the account. So all of it gets routed through the internet cortex.  

Ila    00:44:52    So it is a sort of special path into this mapping system. So what is enter rental cortex do well, I mean, it does a lot of things, but there’s this one subset of cells that the Mosers discovered in the mid two thousands. Uh, and they have this remarkable property where, when I, as I said, if we’re walking around and they fired at these specific locations, but not just individual specific single places in like, place on Stu, but they fire at multiple different locations. And amazingly those locations are located on the vertices of a regular triangle of the title. And if the room size is, you know, twice as thing, then you just get the same period, like the same mat is, but it just has many more cycles. Right. So it just kind of, it times that space  

Paul    00:45:44    Different spatial scale. Yeah. By the way, I just want to interrupt because, uh, it’s, since it’s an audio, audio only podcast, the listeners, can’t see how giddy you look talking about grid cells. So a big smile on ELs face while she talks about and play cells. So I don’t want to leave them out, but it’s just a pleasure to say, sorry to interrupt them.  

Ila    00:46:04    Yeah. They’re amazing. I mean, it’s just, yeah. I, I feel like a kid whenever I think about the existence of themselves. So, um, yeah, so, so, um, if you look at a cell, it has this periodic farming, even though the ground, or, you know, there’s no cues in the environment that are indicating that, um, that, you know, th this is some periodic here that have a periodic arrangement of landmarks. There’s nothing it’s just an internally generated periodic representation. And what, I just want to point out how remarkable it is, because let me just describe the experiments that people do when, when they were quite these good cells. So they take a rat and they put it in a, in this drum, this, this, this, you know, this, the space that has, uh, you know, supplied bottom, maybe a meter across in diameter, it’s a circular enclosure, but some walls and the walls aren’t terribly high.  

Ila    00:46:57    So, you know, it, the animal can look over the walls and see the bigger room and see that there’s some orienting, Tuesday was a door and a window. And so that gives them some general orientation, but within the Spacebook, there aren’t really any space. There’s no spatial cubes. This is just a feature of this circular environment. Um, and the animals are just kind of running around and usually chasing things like chocolate chips, or some of them were cookie crumbles or Cheerios that are scattered. And so there’s kind of running around origin for the food, and then they’re doing things like they pause, they might stop them and scratch themselves. They might, you know, scratch, you know, they might like kind of put their paws up along the side of the wall and look up. And, um, and then they they’ll, they’ll run around at different speeds and in different directions, but every time the animal, um, comes to one of those vertices that doesn’t actually exist in the real world, but it arrives at the neighborhood of that project, that cell, that fired previously at that vertex part is again about working, even though the animal made this time, you’re running in the opposite direction, it may be running a different speed.  

Ila    00:48:08    And if it’s running in the opposite direction, it has a completely different visual input than it did when it was running that way. So somehow it has, um, you know, this internal, this cell has access to a good estimate of where the animal is with enough detail that it, it, you know, with fidelity lays down spikes repeatedly over 20 minutes, 30 minutes of random running rounds, you know, at that same vertex. And at each,  

Paul    00:48:38    I actually don’t know this, the answer to this, and I’m sure that you do, does this happen? Does the map to the spikes happen immediately? Or is it a, a gradual couple minute process? I feel very ignorant to not know the answer to this question.  

Ila    00:48:52    Yeah, no, that is an excellent question. And I think the answer is that those spikes that, you know, fall on the vertices of this lattice, you know, appear on the very first, you know, trajectory like the first part  

Paul    00:49:04    Of the  

Ila    00:49:06    State. Yeah. I mean, there’s some scatter, right. So they’re not, you know, I mean, the question is what is the precision of these fields? And so, you know, the fields have a, you know, a size, the blogs, you know, the, the, the neighborhood in which they fire is like probably about 27 years. And so, you know, plus a mine is to have a good estimate of their position plus, or minus.  

Paul    00:49:25    Yeah. But it’s like pretty damn quick. It’s like set right. When they start,  

Ila    00:49:29    It’s pretty much that when he started, yeah, that’s right. That’s right. It’s kind of exists right. From the get-go and yeah, exactly. And you can put them in a, in a novel environment and they’re, they go like, off, they run and they’re, you know, firing spikes, um, on these like trying to relapse patterns in a brand. Right. So it’s pretty incredible. So then the question is, okay, so, okay, so that’s one, so, uh, that’s oneself, just one cell has this triangle. Right. And then you can ask, well, what are, you know, are there other cells like that? And it turns out, you know, if you look nearby, so that cell, there are other cells that have the same lattice response, like the same periodicity, the same angle or orientation. Right. But the lattice, um, but the only are different by, you know, shifts. Like you can just, just originally take that pattern and just translate it.  

Ila    00:50:20    And that’s what they do, the responsibility themselves. And so, so it’s all constellations of this one, periodic, this one. So if you had, if you could, you know, record all of those cells would that same period, but different shifts, you, you would know exactly where the animal was in the room up to like, um, the, the periodicity of the animal, because it’s completely degenerated because once you’ve moved by one period, once the animal has moved by one period, it’s back to, you know, uh, the same firing field that, you know, if that cell was on and all the cells are shifted by that one period. So that all back to the original. So it’s basically like, it’s, it’s, it’s like, it’s literally like assignment, right? It’s this periodic thing, um, that is, um, you know, basically not giving you information about the analyst position other than, as like the space up here.  

Ila    00:51:11    Right. Uh, okay. So then, I mean, then there are mechanistic questions that, you know, are most like, why, how on earth, like on the side of a circuit that does that. And then there’s like the questions of like, why? Right. It’s like, kind of like, okay. So then it turns out that if you go further in the editor on the cortex and not look at the immediate neighboring cells, but there’s kind of the small, so the hippocampus is like a banana and the adrenal cortex has this long axis that kind of aligns with that banana. So if people along the long axis, then you’ll find other cells that are also grid cells that have a different period.  

Paul    00:51:45    So th these are different spatial scales along the, how’s it a dorsal lateral. It doesn’t matter along the banana of the,  

Ila    00:51:52    Yeah. This lengthwise along the banana. Yeah. You go from one end to the other and they come in clusters. So these discreet clusters of cells that have, you know, a few discreet, um, distinct spacings scales or periodicities, um, those with each other. So it’s kind of like, like from a coding perspective, it’s like saying, um, you know, I walk into a room and I want to know the time and, you know, instead of a single wall clock on the wall, that’s like a 24, like a 12 hour wall clock. Instead, what I’ve got is, oh, by the way, I should mention what are those periods, right. Like, so what is that scale? So it turns into the smallest scale is about 30 centimeters. And the biggest scale is about a meter and a half at most to,  

Paul    00:52:32    Yeah, this is in rodents, or across all phyla? Okay.

Ila    00:52:37    Yeah. But you can see that from smallest to largest is only a factor of two or three. And rodents are prodigious explorers: over one day, they'll go a kilometer or two looking for food, in each dimension. So they cover a lot of ground, and even the largest scale, at around two meters, is nowhere near enough to specify their position in the world. So it's like coming into this room and seeing a wall full of clocks, and one clock has an 11-minute period, and another clock has a 12-minute period, it goes around every 12 minutes, and the next one goes around maybe every 17 minutes. And so

Paul    00:53:21    You kind of have that because of the second hand, minute hand, and hour hand. However, you're talking about a lot more hands and

Ila    00:53:28    A lot more hands, exactly. And moreover, the second, minute, and hour hands are factors of 10 or more apart, it's a hierarchy, right? Each one is a finer resolution of the next. Here it's like having a bunch of clocks with very similar periods. Instead of one second, 60 seconds, and 3,600 seconds, you instead have clocks that are, I don't know, 10 minutes, 11 minutes, 12 minutes. They're almost all the same, but they are distinct. And the questions are how: how would you code a time in the 12-hour day with those clocks, and why on earth would you represent time in that way? Right.
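
To make the wall-of-similar-clocks idea concrete, here is a toy brute-force decoder (my own illustration; the periods and the decoding method are invented for the example): each module reports only a phase, yet intersecting the phases of a few close-but-distinct periods pins down position over a range far larger than any single period:

```python
import numpy as np

periods = np.array([0.30, 0.42, 0.55])  # meters: similar, non-hierarchical scales
true_x = 7.31                            # far beyond the largest single period

phases = true_x % periods                # each module reports only a phase

# Brute-force decoder: find the position whose phases match every module.
candidates = np.arange(0.0, 20.0, 0.001)

def circ_err(p, ph):
    d = np.abs(candidates % p - ph)
    return np.minimum(d, p - d)          # circular distance within one period

total_err = sum(circ_err(p, ph) for p, ph in zip(periods, phases))
print(f"decoded: {candidates[np.argmin(total_err)]:.3f} m (true {true_x} m)")
```

The unambiguous range is set by how long it takes all the periods to realign (here more than 20 meters), which is the coding advantage of many similar clocks over one.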

Paul    00:54:21    Right. So backing up to place cells: I don't know if traditional is the right word, but the recently traditional story is that the place cells themselves are encoding the grid cells, the patterns in the grid cells, and correct me if I'm wrong. And what your recent modeling and theoretical work has shown, you make the strong case, is that it's in fact the grid cells encoding the place cells. So maybe, I don't know if this is the right time to bring that home.

Ila    00:54:53    Yeah, absolutely, that's probably a reasonable time, and we can talk about it more when we talk about the dynamics of these cells. So ultimately, the anatomy of this area: I mentioned that the entorhinal cortex, where the grid cells reside, is this gateway to the hippocampus, right? All of the external sensory evidence, all that sensory data, is coming into the hippocampus via the entorhinal cortex. If the anatomical flow were just entorhinal cortex into hippocampus, then obviously grid cells would drive place cells and there would be no debate. But this is a very famous loop pathway: the entorhinal cortex projects to the hippocampus, and the hippocampus then outputs back through the entorhinal cortex. In fact, the hippocampus outputs to the deep layers of entorhinal cortex, which then project back to the superficial layers of entorhinal cortex.

Ila    00:55:53    So the whole thing is one big loop. And so the question becomes, in this big loop, who's giving rise to whom? Or is there even such a thing, can you even answer that question? As you pointed out, there are a couple of models. In one model, grid cells arise from place cells, through a pattern-forming process on top of the place cells. And then there's the opposite school of thought, in which place cells are generated by combining grids at multiple different scales: you can set up interference patterns where, if you align grids of different periods so that they all have a peak at, say, zero phase,

Ila    00:56:38    then, because they have different spatial frequencies, these periodic waves interfere destructively everywhere and only interfere constructively where you lined them up, at zero. So if you sum all of them up, you get one big peak at zero and nothing everywhere else. That's the general idea behind how you can take these periodic representations at different scales and sum them up to construct place fields. So that's the opposite direction of models, and the question is which one actually obtains. And also, you should really be asking yourself at this point: is that really what the brain is doing? Is it in the business of representing stuff
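
A sketch of that interference construction, with plain cosines standing in for grid responses and arbitrary periods (nothing here is from a specific paper): waves of different periods aligned at zero phase sum constructively only there, leaving a single place-field-like peak:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
periods = [0.7, 0.8, 0.9, 1.1, 1.3]   # arbitrary distinct grid periods

# Each "grid" is a periodic wave peaking at x = 0 (zero-phase alignment).
summed = sum(np.cos(2 * np.pi * x / p) for p in periods)

# Constructive interference only where all waves align: one dominant peak.
print(f"peak at x = {x[np.argmax(summed)]:.2f}")  # ~0.00
```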

Ila    00:57:23    and then re-representing stuff? I would say no. Clearly there has to be processing; the brain wouldn't just construct a representation and then re-represent the same information, that doesn't make any sense. So we have to think about what the roles of those two areas are. Okay, so that's the question, and I guess I can give you my view on this. My view is that the grid cells are the primitives in these dynamics, they're the primitives of this process. And there are many reasons to think so: from the dynamical responses of grid cells, from the fact that they're present in sleep and they maintain their structure

Ila    00:58:12    in sleep. This was shown in studies by my group and other groups from 2013 onwards, and then most recently demonstrated directly from multiunit recordings in the Moser lab, showing that the states of a grid module, a single group of cells with the same period, lie on a very low-dimensional surface in the shape of a torus. That's the prediction of the dynamical models of grid cells, and it looks like the grid cells have those responses not just when the animal is awake and doing its spatial-representation business, but even when it's asleep, and even when it's running in 1D or in differently distorted environments; the grid cell states stay the same. But the place cells, as we already observed, remap across different environments, and in sleep they don't maintain their relationships. So one group of cells maintains a highly structured relationship across all of these conditions, they're invariant, present all the time, with immutable responses and relationships, while this other area has responses that change dramatically from one environment to the next and across waking, sleep, different behavioral states. That suggests that the primitive here is the grid cell population.

Paul    00:59:39    And I'll leave it to the listener to look at all of the details across the multiple papers, though I have one in particular in mind. You have a model which shows and accounts for how grid cells essentially generate the scaffolding for the place cells, what you call the scaffold network. And then you have external data coming in through the cortex, also arriving at the place cells in the hippocampus. So it's kind of a three-module neural network that accounts for these things. It's a really nice mapping of the different kinds of models onto the different brain areas that I described,

Ila    01:00:29    Beautifully, that's right. And the idea here is: why do you need a scaffold? I guess that's the relevant question, right?

Paul    01:00:39    Because you need your clothes to dry.

Ila    01:00:41    You do need your clothes to dry, and you need that clothesline. It's just that the grid cells generate a very long clothesline. So this gets back to the question of the Hopfield network and its capacity.

Paul    01:00:55    So  

Ila    01:00:56    If the place cells had to maintain patterns in memory through their recurrent weights, according to the Hopfield prescription, then, as we talked about with pattern completion of stored memories, you quickly run out of capacity, and when you go over capacity you hit this memory cliff where the whole board goes white. That was the problem with the Hopfield networks. So we were still left with this puzzle of how place cells can have all of these distinct maps, one for each environment, if the Hopfield model doesn't explain it. So I talked about each of these grid networks: each set of cells sharing the same period is itself like a Hopfield network, but a continuous one.

Ila    01:01:43    It's representing position as the animal runs around space. So in a continuous sense, it's representing a whole continuum of stable states rather than pinning a few discrete ones. If the animal stops moving at a point, the grid cells corresponding to that position continue to fire. If the animal closes its eyes, or you turn off the lights, those cells will just continue to fire, in other words, representing a memory of where the animal estimated its position. So each of the networks is like its own Hopfield network, that's how we can think about it, but then there are all these discrete modules of different scales. Each of them has N neurons, and each has on the order of N states, but each module can be in some state independently of all the other modules; they're uncoupled modules. So now, if there are M modules, you're talking about something like N times N times N, M times over, distinct states.

Paul    01:02:47    High capacity,  

Ila    01:02:49    It's high capacity, exactly. You've got a capacity that's exponential in M: you've got N to the M states. These are all stable states, right? In each module, the states are stable. Okay, so that's the idea. So you've got a very large library, or a very long clothesline, if you will. The grid cells provide this long clothesline, and as you move through the world, through space, you're moving along the clothesline and you're also getting sensory data. So you've got these states on the clothesline, you've got the sensory data coming in, and the hippocampus is like the clip: the hippocampus clips the clothesline to the sensory data, clips them together in place.

Ila    01:03:42    And so the states are stabilized by this large library of stable grid cell states. And then the last piece, which is a bit technical, is just to say that if you close the loop, so that the place cells project back to the grid cells, then we can show that the whole circuit together makes all of those states stable fixed points. And so the whole thing can act as a memory system: it can do pattern completion and so on. So that's the idea of the scaffold memory.
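
A back-of-the-envelope version of the capacity argument, with purely illustrative numbers: a single Hopfield network stores on the order of 0.14 patterns per neuron before the cliff, while M independent modules of N states each offer N to the M combinatorial scaffold states:

```python
# Toy numbers, purely illustrative of the scaling argument.
N = 1000   # stable states (and roughly neurons) per grid module
M = 5      # number of grid modules

# Classic Hopfield storage: ~0.14 patterns per neuron before the cliff.
hopfield_like = 0.14 * (N * M)

# Modular scaffold: each module sets its state independently.
scaffold_states = N ** M

print(f"single Hopfield net of {N * M} neurons: ~{hopfield_like:.0f} patterns")
print(f"scaffold of {M} modules: {scaffold_states:.1e} stable combinations")
```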

Paul    01:04:12    Nice. So I know that our time is coming to a close here, but I was actually going to ask you about György Buzsáki's inside-out approach to the brain, I'm not sure if you're familiar with that, when I was asking you about your bottom-up approach and how those two relate. But I'll bring him up in a different context here, because he has this idea that basically we have preexisting networks that are just waiting to get filled. Does this story agree with that viewpoint?

Ila    01:04:45    Yes, it completely resonates with that viewpoint. And again, this is a beautiful example of what we said earlier: biologists have these amazingly sophisticated internal models, mental world models, of how the brain works, and this is just such an example. He's a fountain of such ideas, continuously. And I think that, from the formal side, what we're saying is very much in line. His idea of internal states, for example: the grid states are maintained during sleep, so they're literally internal states, endogenously generated states that exist independently of what the outside world is saying, and they're just there to be associated with, or clipped onto, inputs from the external world.

Paul    01:05:35    As needed.

Ila    01:05:36    Yeah, exactly. Exactly.  

Paul    01:05:39    Okay, that's wonderful. Another thing: I actually didn't realize that you had written this review, which I guess has been resubmitted to Nature Reviews Neuroscience, all about the power of, and recent trend toward, the dynamical systems approach, including attractor landscapes, like we were just talking about with Hopfield networks and such. It's a really nice review, by the way, and I hope it gets accepted soon; it's available, I think, on bioRxiv, or is it arXiv? So, are we going to talk about every brain process as an attractor landscape in the coming years? It feels that way in my biased, myopic vision.

Ila    01:06:23    The latest example I've heard of: have you heard David Anderson talk about fly copulation behavior? In the lead-up to fly copulation, the male fly enters into some kind of attractor state along which it evolves, just before copulation.

Paul    01:06:46    That’s just a punchline waiting to happen.  

Ila    01:06:48    Exactly. It really is.  

Paul    01:06:52    So you're telling me the attraction happens right before... yeah, sorry, I had to. I'm sure that joke has already been made. So,

Paul    01:07:07    But I mean, is that the way you view it, that everything is just going to be... Like, I've had David Barack, for example, on the show, well, many people, John Krakauer, who, you know, they wrote a review on this, and I guess there are multiple of these. But John in particular thinks that these attractor landscapes, these manifolds, are real entities that sit at a nice in-between stage, between describing something at the mechanistic single-neuron or population level, and the behavior or the cognition that we're interested in. Do you agree with that view, and how would you describe your view of that?

Ila    01:07:43    I do. I do agree with that view. I think that's exactly right: it's an abstraction where we can link in from both ends. And you had asked earlier about top-down versus bottom-up, and attractor dynamics seem like something right in the middle, where we know how to connect on both ends. As for the question of whether everything is going to be understood in terms of attractors: not clear, I think the jury is out. The place where attractors have to be involved is wherever you have persistent activity. There it's got to be attractors, at the level of either single neurons or circuits. For activity to persist, you've got to have an attractor, almost by default; it's almost tautological to say so, but you need a state that is stably maintained.

Ila    01:08:31    And the way we understand the emergence of stability in a circuit is fixed points: fixed-point dynamics, and that is an attractor. So I think that every time we're looking at persistent activity, you're going to see attractors. It might also hold when we're talking about weight changes: the question about LTP, the fundamental question of how learning and memory are localized in synapses that change and remodel. Even for a synapse to persist requires some kind of stabilization of its state, because how does the synapse know what size it should maintain over time as its molecular components turn over? So even there, there may be something like attractor dynamics, ultimately, at the level of the synapse. Anytime you're talking about persistence over time, I think there has to be stability.
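
A minimal rate-model sketch of that persistence point (all parameters invented for illustration): a single unit with strong positive self-feedback has stable low- and high-activity fixed points, so a brief input pulse switches it and the activity then persists on its own, an attractor in the simplest sense:

```python
import numpy as np

def step(r, inp, w=8.0, theta=4.0, dt=0.01, tau=0.1):
    """One Euler step of tau * dr/dt = -r + sigmoid(w*r + inp - theta)."""
    return r + (dt / tau) * (-r + 1.0 / (1.0 + np.exp(-(w * r + inp - theta))))

r = 0.0
for t in range(1000):
    inp = 5.0 if 100 <= t < 150 else 0.0   # brief input pulse, then silence
    r = step(r, inp)

print(f"rate long after the pulse ends: {r:.2f}")  # stays high: persistent activity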

Paul    01:09:22    Let me ask you about this: the way these landscapes are calculated these days is from the spiking rates of neurons. And you actually talk in the paper about how this is one temporal and spatial scale at which to view things, and maybe it's enough, but there are other processes, neuromodulatory processes, glia, et cetera. And I've been in conversations recently with a few people who are excited about astrocytes and cognition, and the calcium signaling they do. So then you wonder: are there attractor landscapes among the glia, and how would you cross levels between neural activity, glial activity, and slow neuromodulators? Is there a way to form dynamical landscapes across levels like that, or do you think we're always going to be contained within one level?

Ila    01:10:18    I think there's got to be interaction between levels. Most of our mathematical models typically assume one biophysical timescale, and then, to extend it, you build circuits with positive feedback that prolong the decay and hold the state through recurrence, through reverberation. But I don't see why there couldn't be, I mean, it seems like there could be major benefits to having some interaction between different scales in time. One of the problems with attractor networks is that they tend not to be very fast-responding: you're creating this long timescale, but they're also sluggish, you have to give a huge input, yeah.

Ila    01:11:10    You have to hit them hard to change their state. And I think it's a bit of an open question whether, if you had multiple timescales across these systems, you could have a bit of both: some rapid responses as well as slow stability. So yeah, there's so much to do on the mathematical side. And we haven't touched on, I mean, you made the point that you could think about attractors as this middle level of description for thinking about cognition. So I just want to mention this beautiful line of work from Liz Buffalo, from Aronov and Tank, from Tim Behrens and colleagues, showing that some of these same representations for space... we talked about how the hippocampus is involved in general memory but is also relevant for spatial memory, and now we're going back the other way: they're showing that the same spatial circuits modulate their firing in the same ways they do when animals navigate in real space, but now when the animals are navigating some conceptual task space.

Paul    01:12:16    Yeah, I had Tim on and we talked about stretchy birds. That was earlier, when, yeah,

Ila    01:12:24    That’s right. So what  

Paul    01:12:27    Here's what I want to ask you: do you think that this approach, the dynamical systems approach, will give us insights into our subjective experience? And I want to pair that with the question of whether the manifolds will always be low-dimensional enough, or, in air quotes, "well-behaved" enough, for us to understand them. Or will there be cognitive processes that are too high-dimensional for the dynamical systems approach to be useful, if that makes sense?

Ila    01:13:02    Yeah, I mean, brilliant. Well, on the first one, I think that yes, we can understand even pretty high-level cognitive processing in terms of some of these attractor landscapes and attractor dynamics. Already, things like attention: we can think about attention as winner-take-all dynamics between multiple potential targets of our attention, focusing on one. And there are lots of perceptual effects, like conscious perception in binocular rivalry, where you have two stimuli, one on each retina, both present, but you're only consciously able to perceive one at a time. So I think even low-level dynamics manifest very quickly at the high level of consciousness, at the level of perceptual report.
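
A toy version of attention as winner-take-all dynamics (the connectivity and inputs are invented for the example): rate units that excite themselves and inhibit each other relax to an attractor in which only the most strongly driven target remains active:

```python
import numpy as np

def winner_take_all(inputs, steps=2000, dt=0.01, tau=0.1,
                    self_exc=0.5, inhib=2.0):
    """Relax a mutual-inhibition rate network to its attractor state."""
    r = np.zeros_like(inputs)
    for _ in range(steps):
        # Each unit: its input, plus self-excitation, minus inhibition
        # from every other unit's activity.
        drive = inputs + self_exc * r - inhib * (r.sum() - r)
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))
    return r

targets = np.array([1.0, 1.1, 0.9])       # three candidate targets of attention
print(winner_take_all(targets).round(2))  # only the strongest stays active
```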

Ila    01:13:51    So I definitely think so, and I think those examples are all around us. Or maybe, I guess, the essay on the unreasonable effectiveness of mathematics says that we've got only one language to describe the natural world, which is mathematics, so maybe it shouldn't be a surprise that we translate everything into the mathematics that we have. So I don't know, but I feel like, philosophically, yes, the dynamical systems, attractor perspective does explain a lot of the pretty high-level conscious percepts. I don't know the answer to the second one, whether the manifolds that exist in our brain are sufficiently low-dimensional for us to understand them. I mean, mathematically, we don't have good ways to understand higher-dimensional manifolds.

Ila    01:14:47    Once we get into six, seven, eight dimensions, we can do things like topological data analysis: we can characterize how many cycles, how many rings, how many loops, how many higher-dimensional voids are in a manifold. But putting it all together to construct the whole manifold, that's more like doing a piecewise characterization. Like an elephant: it's got four legs, and it's got a tail, and it's got a head, but how do we put those together? Can we put them together in the right order to understand that it's an elephant? Or is it more like a Picasso, a cubist perspective on an elephant, where the pieces are just joined together? So what are our prospects if the brain's fundamental units of representation are higher-dimensional manifolds?
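
For the topological-data-analysis point, a sketch using the ripser package, assuming it is installed (`pip install ripser`): persistent homology counts the kinds of features mentioned here, loops and voids, and in this example recovers the single 1-dimensional cycle of points sampled from a noisy ring:

```python
import numpy as np
from ripser import ripser  # assumes the ripser package is available

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(300, 2))

# Persistent homology: dgms[1] lists 1D features (loops) as (birth, death).
dgms = ripser(points)["dgms"]
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]
print("long-lived loops:", int(np.sum(lifetimes > 0.5)))  # expect 1: a ring
```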

Paul    01:15:36    Open question, new math. Yeah, always new math. Thanks, thanks for doing this. I know that you have to go, and we didn't even talk about how you mention that this same approach is already being used more and more in AI, to understand artificial neural networks and so on. So we'll have to leave that for next time, along with the host of other questions I wanted to ask you. But I very much appreciate your time. Thanks.

Ila    01:16:01    It was a pleasure. Thank you for having me on.

0:00 – Intro
3:36 – “Neurophysicist”
9:30 – Bottom-up vs. top-down
15:57 – Tool scavenging
18:21 – Cognitive maps and hippocampus
22:40 – Hopfield networks
27:56 – Internal scaffold
38:42 – Place cells
43:44 – Grid cells
54:22 – Grid cells encoding place cells
59:39 – Scaffold model: stacked Hopfield networks
1:05:39 – Attractor landscapes
1:09:22 – Landscapes across scales
1:12:27 – Dimensionality of landscapes