Brain Inspired
BI 141 Carina Curto: From Structure to Dynamics

Check out my free video series about what’s missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience – the study of the geometric structures mapped out by active populations of neurons. We also discuss her work on “combinatorial threshold-linear networks” (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model’s allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.

0:00 – Intro
4:25 – Background: Physics and math to study brains
20:45 – Beautiful and ugly models
35:40 – Topology
43:14 – Topology in hippocampal navigation
56:04 – Topology vs. dynamical systems theory
59:10 – Combinatorial threshold-linear networks
1:25:26 – How much more math do we need to invent?

Transcript

Carina    00:00:03    Physics has shown us that doing mathematical analyses on simple models is a very, very fruitful game. And I, and I strongly felt and continue to believe that it’s a fruitful game to play in neuroscience as well. The perspective is that somehow you have modules, you know, sub-networks in the brain that do things, right, and you wanna, you wanna understand how they can be embedded in the larger network, so, let’s say, they can still do their thing and not be kind of destroyed, in terms of the activity they produce, by the way that they’re embedded. Neural networks fundamentally are high-dimensional nonlinear dynamical systems, and high-dimensional nonlinear dynamical systems are a nightmare mathematically. <laugh> It’s really hard.

Speaker 0    00:01:00    This is brain inspired.  

Paul    00:01:13    Hey everyone, it’s Paul. Often on this podcast, we talk about how deep learning models might be used to help explain our brain processes and intelligence, but a continuing obstacle is that we’re using one complex system to study another. We don’t yet have a great grasp on how artificial neural networks are doing their thing. So it’s an ongoing question how we’ll use them to explain how wet neural networks, our brains, do their thing. Carina Curto is a neuro-theoretical mathematician at Penn State University. I just made that phrase up, but what I mean is that she applies her background skills in mathematical physics to study theoretical neuroscience. One of the mathematical tools she uses is topology, which in neuroscience is roughly the study of the theoretical geometric shapes we use to say something about the possible range of activity of a population of neurons. So we’ve discussed the dynamical systems approach multiple times on the podcast.

Paul    00:02:14    Things like how neural population activity can often be reduced in dimension and traced out over time on some lower dimensional structure, like a manifold or an attractor. And that manifold is a way to think about the shape of cognition, so to speak. Topology is the study of those possible manifold-like structures and the rules by which neural activity is allowed to unfold. It’s a pretty new tool in neuroscience, so we discuss it during this episode. Another mathematical tool Carina uses, uh, are called threshold-linear networks, and in her case often a special case called combinatorial threshold-linear networks. I’m sorry, that’s jargony, but these are highly abstracted neural network models that, importantly, are mathematically tractable, meaning you can use proofs and so on to conclusively state how different architectures of the model lead to different dynamic features of the activity that the model will produce.

Paul    00:03:14    You can look at a structure and know what dynamics will result. An age-old challenge in neuroscience, and now deep learning I suppose, is trying to connect function with structure. So it’s possible these CTLNs are a step toward bridging that gap by connecting structure and dynamics. So we talk about topology, uh, these CTLNs, which Carina always likes to stress is work that she’s done with Katie Morrison, and wider topics, like the differences between different kinds of modeling approaches, the reason why people with physics backgrounds are especially relevant right now in neuroscience, how we can all remember Jenny’s phone number, eight, six, seven, five, three... okay, I’ll stop. Show notes are at braininspired.co/podcast/141. On the website you can also check out my neuro-AI online course about many of the topics you hear on the podcast, or decide to throw me a few bucks a month through Patreon with various bells and whistles. Both of those options are of immense help to me. Learn more at braininspired.co. Thanks to everyone who supports the podcast. All right, here’s Carina. Carina, have you always been interested in and enjoyed math?

Carina    00:04:32    Uh, yes, I think so. Um, although if you had asked me when I was younger, like in high school, or even when I started college, if I would, uh, become a mathematician, I would’ve said absolutely not because, uh, that’s what my father did. And so it wasn’t, it wasn’t cool. <laugh> so I was gonna be really different and become a physicist. <laugh> that was my plan, right.  

Paul    00:04:56    Originally. Yeah. We were just talking about your background in mathematical string theory. Mm-hmm <affirmative> so, which, which is, uh, what you got your PhD in, you were telling me  

Carina    00:05:04    Long time ago. Yes. In 2005.  

Paul    00:05:07    And then you had a decision to make. Uh, what was the decision? You didn’t tell me what the decision to make was.

Carina    00:05:13    Yeah, I actually made the decision before I finished my PhD, so about halfway through grad school. So I went to grad school, first of all, um, completely focused on this idea of becoming a string theorist. So I had been an undergraduate, um, at Harvard in physics, but I had started taking lots of pure math classes at the time. Uh, the environment at Harvard was very surreal in a way, because doing pure math was really considered like very cool <laugh>. And so I got <laugh> increasingly interested, and, and, uh, also because I wanted to be a theoretical physicist, it made sense to take as much math as possible. And then I ended up going to grad school in math, but to do mathematical string theory, um, at Duke, and was very, very much committed to it. Um, and then a few years in, I just realized my, my heart wasn’t in it in a very serious way.

Carina    00:06:08    Like I, I loved the mathematics, you know, I still loved kind of the fundamental ideas of string theory. Like I do think it’s a lovely theory in many ways and, you know, someone should keep pursuing it <laugh>, but, you know, I didn’t, it was, it was at that point, it was sort of like, uh, this was in the early two thousands. It just, it felt like to make a significant advance felt out of reach for a variety of reasons. Anyway, so the point is I, I became, uh, dissatisfied, disillusioned, depressed, I don’t know what you wanna call it. I had a little bit of a, like, quarter-life crisis, I guess, and just kind of didn’t feel like this was right for my career. I still wanted to finish the PhD in string theory, which I did, um, in mathematical string theory. So I, I, I finished my PhD, but I started looking for, what am I gonna do next? So I, I very much took on this attitude: okay, I’m gonna do this for my PhD, I’m gonna write a nice thesis, I’m gonna publish it. I did all that, but then I’m gonna switch fields. And I made the decision that I’m gonna switch fields, like, probably in the middle of my third year,

Paul    00:07:11    And then just kept the grit to stick it out and  

Carina    00:07:14    Then just kept the grit to stick it out. And, and, and I also had to figure out <laugh> what I’m gonna switch to <laugh>. So I needed a little bit of time  

Paul    00:07:22    <laugh> well, I wanna, I wanna ask about that too, but did you get an “I told you so” from your mathematician father, as soon as, uh, or, or did you not confide in him that it was, like, the mathematics?

Carina    00:07:31    Oh, I, I, I kind of kept this all secret. <laugh> Okay. Yeah. I was kind of lurking, I was looking for something else. I think in my fourth year, the beginning of my fourth year of grad school, I actually sat in, like, on an economics course, uh, because I was like, oh, maybe economics will be interesting. And that was intolerable <laugh>, because, at the level, the level of economics that I had, I needed to sit in kind of on an intro undergrad course, but the intro undergrad course, like, spent the entire lecture, you know, talking about vector calculus or, you know, just very, you know, math that I was teaching as a grad student. Um, and so it was <laugh>, you know, there wasn’t a good level. There was no, like, you know how there’s, like, you know, there’s always “field X for dummies,” right? But there also needs to be “field X for people who know a lot of math and don’t want the review.”

Paul    00:08:20    Yeah. But neuro neuroscience is not that field.  

Carina    00:08:23    No neuroscience is not that field, but I, I was just saying why the economics class didn’t work for me. Um, okay. So it was just, yeah. In some ways maybe sitting in on a neuroscience class was much more appealing because there was no math. So I wasn’t bored with the math <laugh> it was just like, it was all biology. It  

Paul    00:08:40    Was all new. That hurts. That hurts, right?  

Carina    00:08:42    Yeah. Well, I mean, in the undergrad, no, I’m, I’m, obviously there’s a lot of math in neuroscience, actually. I mean, I, I don’t mean to, otherwise I wouldn’t, I wouldn’t be here, but the intro level neuroscience courses at Duke, right. So I, I sat in, on, on an intro level neuroscience course and loved it. It was totally different. And in a way it was all new, because I had avoided biology since ninth grade <laugh> in high school.

Paul    00:09:05    Collecting at the it  

Carina    00:09:07    Stamp, collect. Exactly.  

Paul    00:09:08    But did you, when you were sitting in that course, then with your heavy math and physics background, did you just immediately see an opportunity to enter the field because oh, these people, they, they don’t know what they’re talking about. Uh  

Carina    00:09:21    <laugh> no, no. I mean, it was, that was not my feeling at all. My feeling was like, wow, this is so cool. You know? And I didn’t, you know, it just, it was more just like entering a new land, you know, and just seeing I was like a tourist. Right. And I was like, wow, there’s all these beautiful, all this beautiful structure, all, you know, who doesn’t wanna know how the brain works. It just seemed very exciting. Um, and I didn’t, I didn’t, you know, see much math in it. I mean, I think maybe the Hodgkin Huxley equations were like mentioned in passing, you know, like  

Paul    00:09:53    With the, with a Harrah, we did it Hodgkin, Huxley result the brain. Yeah.  

Carina    00:09:57    Right. Exactly. Um, but no, I just, I, cause I really didn’t know anything about biology. I mean, I was very, very ignorant. Uh, and so to me it was like learning biology for the first time at age 25, you know? And I, I recommend it actually, because by the time you’re 25, you really appreciate it. <laugh> It’s like, wow, all these cool things. And you’ve lived long enough that, you know, you have a brain, right? And it’s kind of fun to, to learn about how it works. Um, but all the things, you know, the ion channels, the synapses, all the various structures, you know, I was blown away, just simple things, you know, that everybody in neuroscience knows from when they were babies, right? Like, it was all new to me. I didn’t see necessarily potential for myself in there, but it was just kind of fun, um, much more fun than the economics class <laugh>.

Carina    00:10:51    But then, then <laugh>, it just happened that, uh, Duke was running, uh, like a computational or theoretical neuroscience seminar series. And so I went, I went to some of the seminars and I, I distinctly remember Larry Abbott coming down and giving a talk, uh, about vision, but he very much brought, you know, like he does in so many of his talks, he very much, uh, brings to the forefront the perspective, his perspective as a physicist. It was such a lovely talk. And it was like, it was the first time I had ever seen anything like that. It was the first time I had seen a physicist, someone who’s, you know, obviously a theoretical physicist by background, talking about neuroscience. And that combination made me see it, you know, made me see myself as being able to do this. I kind of started paying more attention. Um, and then this crazy thing happened.

Carina    00:11:49    <laugh> uh, there was a kind of a young professor at Rutgers, uh, named Ken Harris. He’s still around. He’s a very prominent, um, neuroscientist, um, in London now at UCL, not so young anymore, not so young anymore. <laugh> Neither am I <laugh>, am I? Yeah, none of us are young anymore. But, but at the time he had just, I think he had had a very successful postdoc in the Buzsáki lab and he was starting up his own lab at Rutgers and he was recruiting. He was trying to recruit postdocs and grad students. And he himself had a background in math and physics and had actually started grad school, uh, to do string theory. Oh, okay. Yeah. So it’s full circle. Yeah. Right. And so I think that’s why he gave me a chance. Um, but he started spamming math departments, like literally sent email spam to math departments saying, by the way, I’m starting this lab, if you’re, you know, if you have strong quantitative skills and are interested in neuroscience, you know, get in touch. You know, at the time I had, I had just gotten married, actually. My new husband at the time, Vladimir Itskov, was also kind of having, was also having a, a little bit of a crisis.

Carina    00:12:58    I mean, he was doing differential geometry as a postdoc and had also kind of lost interest in, uh, the problems that he was working on. And, and together we sort of answered the call <laugh> and, uh, and wrote to Ken. Anyway, he ended up hiring us both, and he ended up, by, you know, the summer before my fifth year of grad school, he had promised me a postdoc position for after I graduated, and I accepted it before I started my last year. And that was wonderful. It was amazing, because suddenly I had this really serious opportunity to learn some neuroscience and to do it in a lab that, uh, was being run by someone who, you know, understood my background, right? Someone who really had, had kind of a similar background himself. Yeah. So I kind of, I committed to that postdoc and then I started my fifth year and I still hadn’t, I still hadn’t told anybody.

Carina    00:13:53    And, uh, I remember having this conversation with my advisor where he was like, oh, so it looks like you’re gonna graduate this year, it’s time to start thinking about jobs. It was like in September of 2004. And I was like, oh, I already have a job. <laugh> And so that’s when I <laugh> <laugh>, it was really weird. Uh, I felt, I felt like I was confessing to cheating on him or something, you know, with this other field. Um, but yeah, so I, I, I, I told him that I had already gotten a job and that I was gonna do, I was gonna go to a neuroscience lab, and I think he was very surprised, uh, but ultimately very supportive. Um, yeah. And then I was, he made it easy cuz I didn’t apply for jobs. I didn’t have to go through that stress of going on the job market, finishing grad school and so on. So I just focused on writing up my thesis and learning as much neuroscience as I could, um, to get ready for my new postdoc and, you know, kind of came in to hit the ground running and, and went from there. So that was, that was my, that was a very long version of my, uh, decision and how I switched, but

Paul    00:15:01    There’s a lot to chew on there, but, and we don’t have to perseverate on this, you know, very long. First of all, I wonder how many people were affected by Larry Abbott specifically in this kind of regard, because I don’t know if you have the same, uh, observation or perception, but it just seems like the, the physics background world has infiltrated neuroscience over the past 15, maybe 20 years. I, I don’t know. A: do you think that that’s true? But B: you know, the, the physics mindset sort of lends itself to, uh, a lot of what at least these days is studied in computational neuroscience anyway. But then thinking about, I, I don’t know if we wanna talk about it relative to string theory in particular, but maybe physics writ large, do you see the, the complexity of studying something like the brain as also naturally lending itself to the physics kind of background and mindset? Or, because string theory, to me, it seems much more, like, you know, reductionist, individual <laugh> particles and equations that are, that don’t necessarily feed back on each other, which is a very different story relative to something quote unquote complex, like a complex system, like the brain.

Paul    00:16:17    I’m wondering if, well, I guess just your thoughts on that.  

Carina    00:16:20    Yeah. So, so first of all, I think a lot of, I mean, I absolutely think it’s true that physics has had an enormous, physicists have had an enormous influence on theoretical neuroscience. And, you know, I would say maybe half of the theoretical neuroscience community is people with, um, physics backgrounds, maybe more than half. It’s hard to tell.

Paul    00:16:38    Part of that is because neuroscience was not its own, you know, there weren’t neuroscience departments, you know, in so many universities back be back 20 years ago. Right. Right. So they, everyone had to come from somewhere.  

Carina    00:16:49    Right. Everyone had to come from somewhere. And so I think, I mean, I think on some level neuroscience itself is a little bit like a, a land of immigrants. You know, I think people in neuroscience come from all kinds of backgrounds, whether it’s physics or math or computer science or engineering, or on the other end psychology, you know, sometimes philosophy, sometimes molecular biology. Uh, so I think there’s a, there’s a huge spread of backgrounds in neuroscience period. You know, not even, I’m not, not even just thinking about the, the more theoretical or mathy. And on the other hand, I think, you know, you might say, well, why are so many physicists, there are other quantitative fields, right. And there’s math and economics, right? Why are there more physicists than there are, um, mathematicians or economists? And I think what’s special about the physics training. It is really a scientific training.  

Carina    00:17:41    Fundamentally. I mean, physics is a science. It, it does deal with, you know, the real world and trying to understand rules for how the world works. Um, but it’s also of all the sciences. I think it’s safe to say it’s, it’s the most mathematically sophisticated. And so every physics student learns a lot of math, every physics student, um, you know, no matter what kind of physics they’re doing, uh, there’s, there’s just a huge amount of mathematics. That’s, that’s in the training. And so physics students, uh, end up learning a lot of science integrated together with mathematics. So it’s like naturally integrated. And I think that’s the preparation that is so useful for something like computational neuroscience. It’s not, you know, so much the, the physics content per se or the math content per se, but it’s somehow the, that integrated approach. It’s like physics is just naturally this field that is developed in a way that there’s this integrated approach to science and mathematics where every time there’s a physical phenomena, you wanna understand, first thing you do is start trying to write down some equations, and this is sort of the beautiful thing about it, right?  

Carina    00:18:43    So there’s natural phenomenon in the world. There are equations you can write down and this includes complex systems, that idea, right? That you can take some messy, you know, real world situation, maybe idealize it a little bit or a lot <laugh>, uh, and then write down some mathematical equations and then you’re gonna do some math and you’re going to analyze those equations. Maybe even see how the behavior depends on various parameters in the equations, you know, push and explore scenarios that are not exactly like the one that you’re initially trying to describe. But you looking now at a family of problems, that whole game, right, where you take something about the messy real world, turn it into equations, play with the equations, you know, study them, really understand them and then feed back into some insight about what’s happening with the original physical phenomena. I think that’s the training that a physicist gets, uh, that is extremely useful.  

Carina    00:19:45    The other thing that helps is that the physics job market sucks. <laugh> if you wanna be an actual physicist, that wasn’t really the job market that I was a part of. I was sort of more on a math track, which, which is a little better actually. Um, wow. But that combination, I think of very good training to go elsewhere, but like a very bad market to stay in, you know, precisely physics, uh, I think has led to this proliferation. It’s almost like, you know, just to use these analogies, right. <laugh> of, of, uh, land of immigrants. It’s almost like, um, physics is like the Soviet union, right? It was like very good, well trained people, but like no jobs for them. And neuroscience is like America. <laugh> like just welcoming, uh, you know, people from all backgrounds. And anyway, uh, don’t wanna, we can’t don’t wanna say much,  

Speaker 3    00:20:36    No nos. I mean,  

Carina    00:20:37    This is not the moment. This is not the moment to make that kinda analogy. But, uh, but anyway, I, I think there’s, there’s some something to that.  

Paul    00:20:45    So the word beauty is often invoked by physicists and by mathematicians. And in 2013 you wrote, it’s a very short piece mm-hmm <affirmative> um, and, uh, I’ll read the title, a major obstacle, impeding progress in brain science is the lack of beautiful models. And I, after our, our brief conversation here, I realize a, someone of your background who got, went into let’s say economics or something like that might just replace the phrase, brain science with whatever field they go into. Right. <laugh> so, but, uh, so I mean, I, I, I guess you’ve kind of covered this already a little bit, but maybe you can restate, you know, at the, so at, at two that was 2013, has that changed or, or has your thought about that statement and, and the very brief piece which I recommend and I’ll link to it, you know, has it evolved, has, as you’re thinking about it evolved, do you think it’s changed in neuroscience? And then I just wanna talk about what an ugly model is and what a beautiful model is.  

Carina    00:21:43    <laugh> sure. Uh, so let me just tell you the context of that piece. So this was in the context of the Obama brain initiative. So back in like 2012 or whatever, there was this, uh, developing, it hadn’t really started yet. Um, this brain initiative, uh, was put forward by the Obama administration. It was being led by Terrance Sinski and, and, and others. Um, and they, uh, had a workshop just like an early workshop where they kind of gathered people from different corners of neuroscience to discuss what we were gonna do in this brain initiative. And so at the time I was, you know, kind of a young assistant professor and, um, but had Al you know, already had a little bit of work that I had already done in, in computational neuroscience or theoretical neuroscience. And so, you know, along with just like a handful of other, um, mathematicians, I was invited to be a part of this, but it was really a big group of people.  

Carina    00:22:42    It was, you know, maybe, maybe over a hundred people, uh, and all of the participants were asked to write a one page white paper. And with that question, like, what do you think is the biggest obstacle, uh, advancing progress? Right? So, so everyone wrote a one page paper. So I decide so that, so I, like, I just wrote that in one day, you know, I like had to submit it and I just thought, well, you know, I’m gonna take a very mathy perspective here. And this is, this is what I think is something to pay attention to. I kind of knew nobody else would highlight the same thing. So, um, so that’s where it came from. And when I meant by beautiful models really was this idea that I was sort of describing from, from physics before is models that you can, you know, that are abstractions in some sense, or dealing with a simplified, uh, version of the, of the real life situation, uh, of the real life phenomenon, but are kind of simple enough that you can analyze them mathematically and get some traction out of that.  

Carina    00:23:46    Um, but rich enough, that they really do capture something fundamental about the phenomenon, so that when you do some, some mathematical analysis, you actually can gain some insight into the original scientific phenomenon. So, you know, the example is kind of that I had in my head and that I referred to in that white paper, um, were things like the hop field model, which mm-hmm <affirmative>, um, uh, for neuroscience was, you know, in, in the eighties, kinda led to a bunch of physicists getting involved in the field. And, um, yeah, and trying to, to see if they could study recurrent networks and, and brain dynamics and so on, um, using this kind of simple model and the, and the, and the advantage of it is really that you can do some math, right? So it’s like, it’s not so simple that any mathematical result is obvious, right?  

Carina    00:24:37    So the idea is you do have some richness to the model. And so if you analyze it mathematically, you might discover something new that wasn’t obvious. It wasn’t, you know, an ingredient that you put in by hand. And, um, but somehow it’s a, it’s a property that emerges from the properties that you do put in by hand. And so that, you know, as a, as a way of really trying to understand how various scientific properties or scientific phenomena interact and how some depend on others and, um, and so on, I think it can, it’s, it’s a very, it’s a very fruitful game. I think, you know, physics has shown us that doing mathematical analyses on simple models is a very, very fruitful game. And I, and I strongly felt and continue to believe that it’s a fruitful game to play in neuroscience as well, but neuroscience is developing in a, a little bit in a different era. So unlike, you know, most of theoretical physics, neuroscience is developing in an age when we have, you know, powerful computers. And so there’s this other pathway as well that you can take with modeling, whereas that now, you know,  

Paul    00:25:52    Is this the ugly pathway  

Carina    00:25:54    Well way? Well, <laugh> ugly is a little bit of a  

Paul    00:25:57    Cause I,  

Carina    00:25:58    I don’t wanna be <laugh>.  

Paul    00:26:00    Yeah. I mean, I wanna  

Carina    00:26:01    Be careful. I wanna be careful cause  

Paul    00:26:03    I think they’re well ugly in the mathematical sense, not in the real world sense. Right. So, or, you know, something like that. Yeah. But you know, the comp, so in computational neuroscience, the first kind of models, right. Well, not the first kind of models, but there’s sort of like these toy models that you can that are kind of simplistic. Right. And I don’t know that you necessarily need to even do mathematical analyses and prove theorems about them, but, but then you can simulate them and get results and then you can kind of build up and, you know, there’s that whole simulation streak, but, but then now there’s all these deep learning models. So that, that’s really what I wanted to ask you about is the, is, are these large deep learning models, uh, transformers, large language models, convolution neural networks, even like, you know, with recurrence, are, are those, uh, ugly models in the technical sense?  

Carina    00:26:52    <laugh> so I, I, I wasn’t thinking about, yeah, no, I know <laugh>, I mean, so by the way, I mean, beautiful and ugly art, obviously subjective terms. And I meant to use them as subjective terms as well. You know, those were words chosen because, because they aren’t particularly objective, you know, and I didn’t wanna sort of overplay, um, my opinion here, but what I was thinking of when I wrote that in 2013 was not so much deep learning, which was not as sensational at the time, you know, it’s, it’s hard to believe, but 10 years ago it was really not as prominent. Um, and what, you know, I was, I was more thinking about very bio physically detailed models. So, uh, there’s another sort of realm of modeling. And, and this is actually kind of, uh, was, you know, seen like in the blue brain project in Europe, and it’s kind of, it’s kind of what spurred the Obama brain initiative was sort of a reaction to a huge investment and that kind of project  

Paul    00:27:50    That failure.  

Carina    00:27:51    Well, it hadn’t failed yet. Right. But, uh, uh, I think there was a, an idea to have an American response, right. And to do a big, you know, brain investment ourselves, but with a different flavor, very different flavor. And I’m, I’m happy for that actually, um, for how it played out. I think it played out much better. Um, but anyway, so at the time that was kind of the other extreme. So it wasn’t so much deep learning, which deep learning is also an abstraction. It’s also in some sense, a very well, we can talk about that later. Yeah. But I it’s, it’s not bio physically detailed, deep learning networks. I was thinking more about the models where it’s like, you take a bunch of Hodgkin, Huxley neurons, except you add all these D righttes and all, all these different compartments. So these multi compartment models for single neurons, for a single neuron, we already have 50 or a hundred parameters.  

Carina    00:28:42    And then you put many of them together, um, into this large scale network. And what happens in a model like that is that in some sense, you have parameters that are interpretable. So you can say, oh, this GSA, I is the value of the conductance of this particular, um, ion channel on this particular neuron, you know, so you have all these different parameters, you know, and you have, you can, you can put together lots and lots of neurons and a network and, um, add different kinds of inhibitory cells. And so there, there’s a, there’s a lot of richness that you can build in. I would say, uh, it’s built in with a lot of guesswork because, uh, <laugh>  

Paul    00:29:26    Right. Ranges of values, parameter  

Carina    00:29:28    Ranges of values. And, um, and it’s the kind of thing that a hundred years ago, nobody would’ve done because it would’ve been impossible  

Paul    00:29:38    Computationally.  

Carina    00:29:40    Right. You couldn’t run a simulation and that’s the only thing you can do with a model like that is run a simulation. Right? Yeah. And so, you know, it’s not a bad thing that we can run simulations. I do simulations all the time. I think they’re very important. Um, they’re a very important tool, but in some sense, because we have this extraordinary ability to simulate large sets of differential equations, um, on computer, we have this other choice in modeling, right. We have this other choice in how to develop computational theories that wasn’t available when physics was being developed, you know, or in sort of like theoretical physics was really cementing itself and establishing itself as a field. They didn’t have this. Um, I actually have, have, uh, brought up the thought I’ve, I’ve come up with this thought experiment before where I was like, what would’ve happened if back in the beginning of the 20th century, right before it was even clearly established that Adams existed, right.  

Carina    00:30:38    Um, way back then when people were sort of laying the foundations for things like statistical mechanics, what would’ve happened. If we had had the ability to measure individual atoms in a gas and track them with position and velocity over time and, you know, and have this huge database where it’s like, okay, here’s my guess. And here’s like a hundred million atoms. And, you know, here I’m like I I’m tracking all their positions and velocities over time and, and seeing how they interact. And now let me build like this giant deep learning model or, or, you know, detail, bio, physical model, physical model, not biophysical physical model,  

Paul    00:31:17    Unless they Don have fever. Right.  

Carina    00:31:19    <laugh> right. Like, imagine that we could have done that, that we could have had like Gaso mix, you know, and just done this like large scale, like gas modeling would that have really been better than the way thermodynamics and statistical mechanics developed, you know, obviously is a different problem. And, and somehow physicists got lucky that they were able to get so much mileage out of, um, simple mathematical models. But I do think that that sort of not having the, that ability to do the large scale, computational modeling allowed a very different style of modeling to develop,  

Paul    00:31:54    Oh, now you’re calling deep, deep learning practitioners. Lazy. I see,  

Carina    00:31:57    I see what it’s no, no, no, no, no. I’m saying, I’m saying no, I’m the opposite. Right? Like in some sense, it’s like, that’s, that’s the hard work, right. And it’s like, I think physicists, we’re forced to take shortcuts, um, and you know, dramatically reduce dimension and, uh,  

Paul    00:32:14    Idealize  

Carina    00:32:15    Abstract and idealize and abstract. I think they were for it. What I’m saying is for them, it wasn’t a choice. Right. They were forced to package things in ways that then they could sort of do mathematical computations with pen and paper. Um, because that was the tool that they had. They didn’t have, you know, super computers that could run huge simulations. Um, and so I think that, yeah, there wasn’t really a choice there. Now we have a choice. And so, and I, and, you know, as with most science, I think when there are choices of different ways to do things, you know, I’m happy for, for all the choices to be taken, you know, like there’s a community, right. So different people can make different choices and we can explore, um, for, or, um, problems in different ways. I mean, you know, I’m not against those choices, you know, I just, but I, but I think that there’s a danger of going too far in the direction of like the large scale computational modeling and then forgetting, um, that there’s also a lot of mileage that potentially could be had by, you know, simpler models that can be analyzed mathematically.  

Carina    00:33:20    And that’s what the sort of the beautiful models concept that I, I kind of wanted to promote at the time and still, you know, I still think that that’s, it’s just a different pathway of, um, trying to do theory and, and I think that it can coexist alongside the other ones. And, uh, you know, I just don’t want that one to be forgotten. And that’s the one that, that I, you know, that I feel like I can do well. Um, and you know, and many people with, with physics and math backgrounds can do. And I just, I, I, I was hoping, you know, with the brain initiative that the brain initiative would carve a space for that. That was kind of my hope and, and I think it has actually. So, um, and I think that, I think there are, there are a lot of people who, who are, um, continuing, I mean, a lot of people with physics training, especially because that’s what they’re trained to do, who are continuing to do that. And in addition to the large scale modeling, and I think the, sometimes the same person is doing both. Right. So, and that’s, um,  

Paul    00:34:23    That’s interdisciplinary.  

Carina    00:34:25    Yeah. It’s interdisciplinary. Exactly.  

Paul    00:34:27    My, my, so my summary of what we just talked about is, uh, you got the beautiful model community, which you’re a part of the ugly model community and the lazy commun, uh, model community. That’s, that’s my take home. Is that now I know him  

Carina    00:34:40    <laugh> yeah. I don’t wanna be on record. I’m calling anyone the ugly model community or  

Paul    00:34:44    The lazy model, and then there’s the Russians and the United. Yeah. We don’t have to,  

Carina    00:34:48    No, I <laugh>. I’m really getting myself in trouble here and no. Um, no, I just, I think there are different approaches. Yeah. So deep learning is another approach that is emerged after that is emerged as being very prominent in neuroscience. I think after I wrote that, yeah, that’s kind of going in a different direction cuz on the one hand they are also very abstract models. Um, deep learning networks are very simple. It’s really just like composition of simple functions is what a deep network is doing. Um, but it has a lot of parameters. So it’s kind of the simple model in terms of the framework, but really complex in terms of parameters. Um, and so, yeah, I don’t know. I don’t know if it’s beautiful or ugly <laugh>  

Paul    00:35:33    <laugh> yeah, we, no, we can, we can drop that. I just wanted to you a little bit about it. Mm-hmm <affirmative> so we we’ll kind of come back to that. And maybe when we talk about the, um, commod auditorial linear threshold models, which is part of what, uh, you’re working on be before we get there though, um, another line of, you know, mathematical analysis that, uh, applies or is beginning to apply more to neuroscience is topology. And, but the way that I have been talking about, you know, like your, your math and physics background, right? It’s not as if math and physics freezes and it, we, we figured all math out and we figured all the physics out and now we can just start applying them because they’re both developing also. And I know that topology is a developing mathematical field as we speak still. Right. But, uh, you have argued that it it’s useful as an application to neuroscience as well. So what is topology? And it might be useful to just relate it to the dynamical systems theory, you know, compare contrast essentially since that’s another like very popular, uh, theoretical approach to, um, the geometry of thought and neural activity, et cetera.  

Carina    00:36:41    Yeah. So I mean, topology is basically geometry without distances. So where you’re really thinking more about relationships and, and, and what properties of an object are preserved as you can deform it without, you know, just to go back into the cliches without tearing holes into it. Um, and so on. And, but, you know, I, so yes, as a field, topology is still developing. Um, I mean there’s a topology community in mathematics, but I would say that the, the, so there’s, it’s sort of developing in two different ways. So there’s a traditional topology community, uh, there’s like low dimensional topology, um, that people do. If you, if you heard about the punky conjecture being proven, I think 15 years ago now that was a, a topology result. So there’s definitely, you know, a lot of, you know, there are a lot of open questions. There’s a lot of, um, active work in topology in math.  

Carina    00:37:42    So that’s sort of pure topology, but there’s also an applied topology community that has emerged recently. So applied topology is kind of a spinoff branch of topology that, uh, deals with applications, but, but in particular, it’s, it’s really kind of computational topology in a sense. So it deals with, um, finding ways to, uh, compute topological properties of data sets of point cloud data, for instance, um, but also can be applied to neuroscience data. And so strictly speaking, the, the topological ideas that are in applied topology are in this, this sort of newer computational topology area are like a hundred years old, you know, basic questions like you wanna, you wanna look at features of geometric objects that are kind of independent of precise distances. So if you imagine, uh, a TAUs probably know what a TAUs is. It’s like a bagel, but hollow. And so you have, you know, you can, you can loop string around two different holes that can’t be pulled shut.  

Carina    00:38:53    So algebraic topology is, um, a field that try that has techniques to compute these properties, right? So you can assign to, um, various geometric spaces, properties such as these sort of numbers of holes and different dimensions. Um, so like a sphere, right, has like a two dimensional hole that it bounds, um, which is different from kind of a disc with a puncture inside, which would have kind of a one dimensional hole that it bounds. And so there are these topological and variance that have been defined, I guess, that a hundred years ago to quantify these sort of qualitative features in a way sort of looking at qualitative features of geometric objects, but, but quantifying them so you can count the number of holes or you can count, um, you know, what are called Betty numbers and so on, but when it comes so, so, you know, these are old ideas they’ve been, they’ve been used in strength theory, by the way.  

Carina    00:39:54    Um, so strength theory is definitely, uh, the collab out spaces, which I studied as a PhD student are definitely studied topologically as well. Um, so, you know, topology has been used in other areas of physics already. That’s so it’s, you know, in some sense, applied topology, isn’t new, but computational topology I would say is relatively new. And so the big breakthrough that was made, you know, starting, I don’t know, around 2025 years ago is when this field kind of started and it’s really taken off in the last 10 or 15 years, um, was to develop, you know, with solid mathematical foundations algorithms that would compute these topological properties for geometric objects that were defined via data. Right? So if you have a bunch of points now, like you no longer have kind of this clean mathematical description, say of the TAs, but you’ve sampled a bunch of points and maybe they’re in this high dimensional space and you wanna know, oh, does that data set have the form of a TAUs?  

Carina    00:41:04    How would you figure that out? I mean, on some level it is just a bunch of points. So literally speaking the topology of the data set is pretty trivial is just a bunch of points. But if you think that, oh, these points are sampled from some underlying manifold and that manifold has a shape to it and has topological features, um, can you, in some way, use that data set to estimate or to get a handle on, uh, what those topological features are? And the answer is yes, you can, and you can do it by using kind of really, uh, basic techniques in algebraic, topology, but automating it, you know, um, with the automating these computations. Yes. Right. And so that was the big breakthrough was to sort of their old ideas, but, um, with a new perspective, applying them to, to data sets, discrete data sets, right.  

Carina    00:42:01    And then, um, developing algorithms to automate them, improving theorems that tell us that the approximations that we make in these algorithms are actually reasonable and will give us the right answer. Right. So there are different things, there are different ways in which we have to kind of cheat when we implement these algorithms. And, uh, one of the important things that needed mathematical justification was that in some, you know, in some limit you really are approximating the right answer. And so those foundations were, um, were laid down, you know, and still being worked on in various variations, but the basic ones were laid down, um, by a bunch of topologist who started working on this. So I, so people who really were coming from that pure topology community and, and decided to start working on applied topology, I think they had DARPA support early on, there were some early applications to robotics that, uh, excited various government agencies. Um, and you know, and then, and now’s, I think it’s, it’s completely taken off and, and is infiltrated in many different communities, including neuroscience.  

Paul    00:43:12    How so in, in neuroscience, what, what does that, how does topology apply to brain data, et cetera?  

Carina    00:43:20    So, I mean, I think in neuroscience, it, it is still, um, much more at the beginning. Um, but on, on one level. So I actually, I wrote an, I wrote an article about this maybe back in 2017, which is what ology, tell us about the neuro code. So high level, I mean, on some okay. So on some level, uh, and this is one, one of the things I articulated in that article, uh, you know, neuroscience produces lots of data, right? So just in the sense that topological data analysis is another lens for analyzing data, for trying to do dimensional reduction for looking at features of data sets and so on, you know, that there’s kind of an obvious hope that it might be applicable, um, just by the nature of having large scale data. Um, but there are, I think even more compelling reasons to look for topology, which is that I think that there are, there’s some evidence, right?  

Carina    00:44:14    That there are areas of the brain that are really encoding topological features of stimuli, as opposed to being so focused on the, on the metric properties. And so one of the places where, uh, you know, sort of the early places where we found that there was a really natural application of ideas from topology was looking at, at something like hippo, Kempo, play cells and place fields. So, and I’m just thinking about really the classic setup, where you have, you know, a rat roaming around some open field environment, um, recording, uh, play cells and, and CA one or CA three of the hippocampus. And, you know, you have these individual neurons that appear to be acting as sensors for, uh, localized regions of the environment. So, you know, you can think of the place fields are like these oval, like regions hotspots, where when the animal crosses through one of those regions, the neuron associated to that region starts firing a lot, but these regions are overlapping. So you have, you know, many neurons, right. And every neuron has its own place field. And you can think of, you know, people typically think of the, of the place fields as covering the entire environment that the animal is in.  

Paul    00:45:37    And thus we have a cognitive map to navigate an environment  

Carina    00:45:41    Like a co a cognitive map. Right. But interestingly, right, the cognitive map, if you think about the activity, the neural activity, um, itself, it doesn’t necessarily encode, uh, the map itself, right? So you have this, um, how do I say this? The, the place field is something the experimental, the experimentalist computes, right? So you do it by correlating positions to the firing of this neuron. And so if you know, all the positions that the animal went through and you know, who fired at every position, you can construct the place fields, but within the brain itself, sort of like the only information really is the spiking of the neurons. And, um, the place maps, the cognitive map is not a topographic map. You know, the, the same cells can either overlap or not in different environments. Um, it’s not like retina toppy in the visual cortex.  

Carina    00:46:36    Uh, it’s very different. And so in some sense, if you think about what information is really available to the brain, it’s not the position information necessarily, um, because there’s no sensor for position, but somehow it’s the information about when place fields overlap. So if two cells fire at the same time, or in a small time window, you can infer that their place fields overlap, because there must be some position that’s activating both neurons mm-hmm <affirmative>. And so if you look at the co-firing, if you look at the neural code that emerges from the animal roaming around this environment, it gives you information about which place fields overlap, but not where they are. So you don’t know, I’m just looking at the spike trains. If I’m just looking at the spiking, I can say, all right, the place field from neuron, one overlaps with the place field for, with the place fields from neurons five, seven, and 15, also five and 15 overlap.  

Carina    00:47:36    And there’s a three-way overlap between one, five and 15. I mean, you can say things like that, but you don’t necessarily know where the place fields are located. And so one of the questions, um, back in like 2007 that, um, Vladimir and I asked actually, so we were, we were at the time, uh, we were sort of talking and collaborating with the BJA lab. And so thinking a lot about place cells. And so on, one of the questions we asked is like, well, what, what can you learn about the environment? What can you learn about the, the stimulus space, just from that, just from that information, just from knowing which place fields overlap without knowing where the place fields actually are. And it turns out that you can know, you can learn quite a bit about the topology of the environment. You can learn if there are holes in the, in the environment, like if the rad is roaming on a, on a table that has a hole cut out of it, or maybe, you know, a tree in the middle, some place that it can’t, it can’t traverse mm-hmm <affirmative>.  

Carina    00:48:32    Um, and, and you can do that using methods from algebraic topology, because in fact, algebraic topology allows you to infer topological properties of a space from what’s called an open cover. So an open cover would be like a bunch of open sets that cover a space. And now you can imagine, you know, going back to my tourists or, or, or a sphere or some other, um, some other topological space, if you take a bunch of open sets that cover it, and that cover is good in some sense that I’m not gonna in technical sense that I’m not gonna describe, but if you have a bunch of sets like that, which you can think of like place fields, right. Covering your topological space and you know, how they overlap. So you can, you can, um, encode the information of how they overlap in a combinatorial structure called the nerve of the cover.  

Carina    00:49:26    They actually called it the nerve of the cover. Yeah. Like a hundred years ago, no connection to neuroscience. But, um, you can encode that in this combinatorial object called the nerve of the cover, which is a simp complex, which is it’s, it’s a generalization of graphs. So graphs, you know, have edges between pairs of objects. This is like a graph, but instead of just having edges between pairs, you can also have like a triangle between a triple or a Tetra Hedron between a quadruple and so on. And that’s keeping track of the overlap. So if I have place fields, you know, one, five and 15, and they not only pair wise overlap, but there’s also a triple overlap then instead of having a graph where I just connect 1, 5, 5, 15, and 15, and one, I also fill in this triangle to, to denote that there’s a, a, a triple overlap as well.  

Carina    00:50:14    And so, um, that overlap information, you know, topology land gives us this nerve of the cover, this simp complex, that encodes all of that intersection data. And, and, and there’s a, theorum called the nerve LA, um, or the nerve theorem <laugh> depending on, on which version. Um, and that mathematical result tells you that you can compute these topological features, uh, like homology groups, uh, which count these holes that I was talking about earlier, you can compute them from the nerve, from that combinatorial object of the cover. And it will tell you what the, what those same topological features are of the underlying space that’s covered by those open sets. So this is exactly like the place field setup, right? So the place fields are like the open sets, the animal’s environment is like the underlying topological space and what this theorum, you know, a hundred year old theorum from algebraic. Topology is telling us, is that just from that commonatorial data of who intersects with who, even without knowing who the place fields are, right. Who the open sets are, just knowing how they overlap. You can compute topological features of that underlying space that was covered. So this is like really kind of a magical, sorry, go ahead.  

Paul    00:51:37    Sorry. No, I was gonna say which, which map onto in, in the case that we’re talking about the example, which map onto the environment, essentially with holes and borders and so on.  

Carina    00:51:47    Exactly. Which map onto the environment. Exactly. And so, and so in the neuroscience context, you know, what this translates to is the neural code, intrinsically encodes, topological properties of the environment in the case of say, hippocampal place cells. And so that I think is a really lovely insight, right? And so that, that neural codes, um, when you look at the structure of the entire code, it’s not just about, you know, which pattern of neural, which pattern of firing corresponds to which stimulus, but somehow the collection of all the firing patterns, the collection of all the code words has structure in it. And that structure of the code reflects structure of the stimulus space that’s being encoded.  

Paul    00:52:32    Well, we can’t record all neurons right. In, let’s say hippocampus CA one or something. Right. So you’re only sub sampling. So does that affect the, uh, ability of a topological analysis to, you know, complete the cognitive map?  

Carina    00:52:47    Yeah, absolutely. So, so when we, so, you know, Vladimir and I wrote a paper on this back in 2008, where we, we simulated, we simulated place cells from place fields, you know, using parent using sort of properties of place fields that were taken from, from real data, but, you know, making many of them, right. So that we completely cover the space as a proof of principle that this would actually work. Um, and so that, you know, again, because of limitations of data, right, we, we had to kind of make synthetic data that matched in terms of properties with real data, but was nevertheless, you know, um, complete, right. So you can completely cover the space. Yeah. However, I should say a lot of advances have been made by now. And so I think the most recent result really looking at these topological properties in, in neuroscience, uh, is coming from the Moser lab.  

Carina    00:53:38    So they have this, uh, nature paper that was published just last year. I forget the name Toro structure or something of in grid cells. So they actually were able to record lots and lots of cells enough to, um, to really cover the space, but they, in their case, they’re doing grid cells and grid cells are, are interesting because, um, there, you have this repeating grid-like structure. So the grid cells are, are kind of like play cells in the, in, in that they are responding to, you know, positions in space except that a single neuron will have many hotspots that it responds to, and they form a hexagonal grid. And so different neurons will have different hexagonal grids that they’re associated to that are kind of shifted from each other  

Paul    00:54:28    In, in scale and orientation. Yeah.  

Carina    00:54:31    Yes. In scale and orientation. And, uh, and so, and they, they really can be grouped into these modules because, uh, if you have a different scale to your grid, you wanna sort of group all those neurons together that ha that have different phases and different orientations, but are sort of on the same grid size. And so they were able to record so many neurons that they could capture enough grid cells to cover the space with various modules, if that makes any sense. And, and there, because of the struck, because of the sort of hexagonal structure of the grid cells of the grid cell, um, map, it was, uh, predicted from topology considerations that, that those neurons, uh, should trace out a TAUs that paper is, is really a tour de force in terms of recording enough neurons, that they can actually see the, they can actually see that TAUs, they can see the topological structure of the TAs in the neural activity. And, and, and again, it just, it just points to topology being fundamental to neuro coding. And I think that we’re gonna see that in, in more and more areas, more and more systems. And I think that’s, that’s sort of an exciting reason to use topology neuroscience that goes beyond just, oh, it’s data. Let’s analyze it with topology because, because we can, you know, I think there’s something more fundamental than that when it comes to neuroco in particular that oftentimes what’s being encoded really is topological in nature.  

Paul    00:56:04    So what I want to ask is, sort of really high level, how topology relates to dynamical systems theory. Because there's a lot of neuroscience talking about manifolds, reducing dimension, and there are trajectories along these low-dimensional manifolds, and that's how we can describe, in a way, some cognitive function as having some manifold shape. So, anyway, it sounds like topology and dynamical systems theory are highly overlapping, but how do they differ? What's different between the two?

Carina    00:56:41    Yeah, so that is a good point. I mean, even in the math community, there is a long tradition of dynamical systems and topology overlapping, I should say. So that is not just a phenomenon in neuroscience; that is something that goes way back. And that is because, with dynamical systems, when you look at the configuration spaces for them, when you look at the set of all the states that the system can have, or the attractors of the system and so on, those can often be described by manifolds that have certain topology. And so understanding topology has always been fundamental to understanding dynamical systems. And I think in neuroscience it's a similar connection. This is another way that topology can potentially play a big role in neuroscience: in understanding those lower-dimensional manifolds.

Carina    00:57:42    So when people talk about dynamic attractors, or they talk about manifolds of activity, in some sense they're really thinking about attractors, right? The network is this giant dynamical system, but the activity, the population vectors or whatever you're tracking of the activity, is getting sucked into a lower-dimensional space, and not sampling the full repertoire of possible states that you might think you have. And for those lower-dimensional structures, sometimes it might be appropriate to call them attractors of the network. But even if not, you would want to describe them topologically. You would want to know: is that a periodic structure? Is it a torus? Is it a loop that comes back to itself? There are all kinds of questions that you could ask about the shape of that, and that would get to the heart of what the network is doing. So somehow you would have this complicated dynamical system, which is your neural network, but the emergent activity of that network would get trapped, or sucked, into this lower-dimensional manifold, and that manifold itself, if you tried to describe it, would have some topological properties. And so the topology of that manifold is somehow a property of the network and of the way that the network functions.
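A minimal sketch of that "sucked into a lower-dimensional manifold" picture, with illustrative dimensions and noise level: embed a loop (two latent dimensions tracing a circle) in a 50-dimensional "population" space, then recover the low dimensionality with PCA via SVD. PCA finds the subspace; topology is what answers the follow-on question of what shape the activity traces within it.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 2000)
latent = np.column_stack([np.cos(t), np.sin(t)])     # a loop: 2 latent dims
mixing = rng.normal(size=(2, 50))                    # random linear embedding
activity = latent @ mixing + 0.05 * rng.normal(size=(2000, 50))

centered = activity - activity.mean(axis=0)
sing_vals = np.linalg.svd(centered, compute_uv=False)
var_explained = sing_vals**2 / (sing_vals**2).sum()
print("variance in top 2 PCs:", var_explained[:2].sum())  # close to 1.0

# PCA says "2-dimensional"; it takes a topological question to notice that
# the trajectory within those 2 dimensions is a closed loop (a circle).
```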

Paul    00:59:11    Okay. All right. We gotta, uh, I really wanna cover the linear threshold networks, so I'm just gonna skip ahead a little bit to those. So, beautiful models: combinatorial linear threshold networks. <laugh>

Paul    00:59:26    <laugh> Um, this is what I originally wanted to talk to you about also. And then of course I did a deep dive, and then all the other questions come out. But these are interesting in that I think they apply to your beautiful-model camp, because they're even more abstracted. Maybe I'll just try to describe them, and then you can correct me, and then we can talk about what they do and stuff. So first of all, one of the goals is to be able to look at the structure of a network and then predict the dynamics that it would give rise to if you ran it: different attractors, et cetera. I know that's one of the goals. And that's an interesting thing, because there's this structure-function problem in neuroscience, right, with Eve Marder's work, for example: if you look at structure, lots of different structures can give rise to the same function, depending on the parameters you're using. You can call it multiple realizability.

Paul    01:00:28    But it's also daunting; the thought is that we may never be able to look at a structure and know function. But if we can look at a structure and know some dynamical properties, that might lend something to say about the function. Anyway, I'm rambling. But what you've essentially done, and maybe we can talk about the simplest, smallest ones, and we don't need to go through examples, but you've gone down to even two nodes, three nodes, four nodes, and abstracted away into this graphical network where the nodes can be connected. And the nodes are all like excitatory neurons: when one node points to another, it's going to excite that neuron. Meanwhile, in the background, you have this invisible, writ-large inhibitory activity that is affecting the whole network, but with kind of different rules for how it affects different nodes, depending on which nodes are pointing to which other nodes and how they're connected. And that sounds very complicated, but it's actually stripped down in a very simple way, which makes it tractable mathematically. Correct?

Carina    01:01:37    Yeah. That's a great description, actually. <laugh> Yes.
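A minimal sketch of the model just described, a combinatorial threshold-linear network (CTLN). The weight rule (an edge makes the effective connection less inhibitory against a uniform inhibitory background) and the commonly used parameter values eps = 0.25, delta = 0.5, theta = 1 follow Curto and collaborators' papers; the forward-Euler integration and the 3-cycle example graph are simplifying choices here, not a claim about how the original simulations were run.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    # adj[i, j] = 1 means an edge from node j to node i in the graph.
    W = np.where(adj == 1, -1 + eps, -1 - delta).astype(float)
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=20000):
    # dx/dt = -x + [W x + theta]_+ , integrated with forward Euler.
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    traj = np.empty((steps, n))
    for k in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj[k] = x
    return traj

# A 3-cycle graph (1 -> 2 -> 3 -> 1) yields a limit cycle: the units take
# turns firing, like a tiny central pattern generator. The asymmetric
# initial condition breaks the cyclic symmetry so the oscillation emerges.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
traj = simulate(ctln_weights(adj), x0=[0.2, 0.1, 0.0])
tail = traj[-5000:]
print("oscillation amplitude per unit:", tail.max(axis=0) - tail.min(axis=0))
```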

Paul    01:01:40    Oh, good. Okay. So, mm-hmm. And I know this goes back to the idea of Hopfield networks, because these are recurrent networks, cuz there are loops and loops and loops depending on how big you build it up, and Hopfield networks are kind of a special case of this, because they have symmetric connections between all the units, et cetera. So with that kind of background description in mind, what have we learned about how structure, you know, how far can we get looking at a structure? So you've done this from two nodes together up to, I don't even know how big you have gotten so far, but where you've been able to prove that if you set a node up this way, or if you look at all the connections, if there are different sinks and sources and ways that different nodes are connected, without even fooling with weight connection parameters, you've been able to prove what kind of dynamics can emerge from that structure.

Carina    01:02:37    Yeah. I mean, not completely, but we've been able to say a lot. <laugh> And it's been exciting. So the basic results that we have are, I think you mentioned, graph rules. What are these? These are direct relationships between graph structure and fixed point structure. I should say the way we have access to the dynamics of these networks is primarily through the fixed point structure. Familiar to anyone who has studied Hopfield networks, or any kind of networks in neuroscience, is the idea of stable fixed points. So when your activity goes to a single firing rate vector that kind of persists, that persistent activity is often modeled by a stable fixed point. And in the Hopfield model, the memory patterns are all modeled by stable fixed points of the dynamics.

Carina    01:03:37    So these are really a single point in your state space. I have a pattern of firing and I'm just stuck there. But there are many of those patterns that I can get stuck in. So imagine a landscape, and there are these minima of your landscape, and the computation of the network is to take an initial condition and evolve the network dynamics until you fall into one of these minima, and that's your recovered pattern. And that initial condition could be a partial pattern, you know, maybe an image that has missing pixels. And then the network would evolve it to do pattern completion, to fill out the rest of the pixel values.

Paul    01:04:21    So if I had a degraded picture of a mouse face or something, I would easily recognize it, right, because of that pattern completion; my Hopfield network inside my brain would settle down to the mouse pattern, and I would be able to say, oh, mouse.

Carina    01:04:34    Exactly. Yeah. So that corrupted pattern would put you in an initial condition that is in what's called the basin of attraction of the complete pattern, which is stored already in the network. And if you don't have that pattern stored already in the network, you won't complete it. So you can only complete patterns that have been previously stored in the network. And so you imagine the entire state space, right, all the possible firing rate vectors, gets partitioned into these basins of attraction, which is like the region where, if I start there, I'm going to get sucked into a particular attractor. And the attractors here are all just stable fixed points, and they correspond to a single pattern that's been stored, or a code word. You might say the neural code of the network is all those stored patterns.

Carina    01:05:18    And so the Hopfield model was really about that setup, right? And the questions that were asked were: how do I store patterns in the network? What kind of learning rule? Once I have all these stored patterns, am I going to get extra, spurious patterns that I didn't mean to store? Because sometimes it's not clear that I can just arbitrarily choose patterns A, B, and C without accidentally creating a new pattern that I didn't want to create. And if I create too many patterns, then my basins of attraction get too small and too finicky, and then I can't really complete. You know, I might start with that corrupted mouse picture, but because I have so many different patterns stored, maybe I will accidentally go into the wrong attractor. So there are issues of capacity: how many patterns can you reliably store and still recover them with reasonable initial conditions, and so on.

Carina    01:06:11    And so that's this whole game of how memory works, right? This is a framework for thinking about memory as consisting of these stored patterns that are really encoded in the synaptic weights of the network. And in the case of the Hopfield model, they really are all stable fixed points, and the model, in order to guarantee that, has symmetric weights. So that just means that between any two neurons, the connection from neuron i to neuron j is identical to the connection from j to i. So that is, you know, not true in the real brain <laugh>.
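A minimal Hopfield sketch of the storage-and-completion story above, with illustrative sizes and corruption level: store a few random binary patterns with the symmetric Hebbian rule, corrupt one, and let the dynamics complete it. Synchronous sign updates are used for brevity; the classic model updates neurons asynchronously.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights: symmetric by construction (W_ij == W_ji), zero diagonal.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def complete(cue, steps=20):
    x = cue.astype(float).copy()
    for _ in range(steps):              # synchronous sign updates
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Corrupt 20% of the first pattern's "pixels" and recover it.
cue = patterns[0].copy()
flip = rng.choice(n, size=20, replace=False)
cue[flip] *= -1
recovered = complete(cue)
print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "/", n)
```

With only 3 patterns in 100 units, well under the classic capacity limit, the corrupted cue falls in the stored pattern's basin of attraction and is completed exactly.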

Paul    01:06:49    <laugh> Well, yeah, no, not true, not true,

Carina    01:06:53    Right, not

Paul    01:06:54    True. The brain's messier. But your models are super abstracted and not messy either, right? So sorry to interrupt, but

Carina    01:07:02    Yeah, so it's not true, the brain is messier, but this particular messiness can be handled by simply not having symmetric weights anymore. And so in the combinatorial threshold-linear network models, we don't require symmetry, right? We don't require the connections to be symmetric. And that just means we allow that W matrix of connection strengths to have an ij value that differs from its ji value. That opens up the dynamics into a whole richer world of possibilities. It turns out, when you impose the symmetry, and there are some other conditions that I'm going to gloss over, but roughly speaking, when you impose the symmetry on these weight matrices, you can get multiple stable fixed points, which is a feature of nonlinear dynamics. So you get a nonlinear phenomenon, the multistability, but you can't get dynamic attractors. You're still stuck with only having stable fixed points as your possible attractors.

Paul    01:08:07    Why do we want to get dynamic attractors, though?

Carina    01:08:09    Because the memories! <laugh> That's a great question, right? So, you know, if all you care about is taking corrupted images and pattern-completing, you don't care about dynamic attractors, right? If all your brain wants to do is look at a fragmented picture and say, oh, that's a dog, and that's a cat, then you don't really need dynamic attractors. But I would argue that, well, first of all, when we look at brain activity, it does not appear to go into fixed points. It's always oscillating; it's always constantly changing. And one of the things that I think neural recordings show us, which is why the manifold methods and so on are so popular these days, is that often what happens to the neurons in a network in response to a stimulus is they don't go to a stable fixed point.

Carina    01:09:07    But they do get sucked into some lower-dimensional manifold, or what I might call an attractor, that is dynamic. So a stimulus comes in, and you have some big transient response, potentially, from the network. I used to work in auditory cortex when I was a postdoc. We would have a sound played, we were recording from primary auditory cortex in rats, and we'd see some big transient response, lots of neurons firing. And then the activity would settle down, and there would be some subset of neurons that would continue to fire a lot, but not in a stable-fixed-point kind of way. There would still be some kind of oscillation, some sort of dynamic activity, but in this lower-dimensional space. So it would be sucked into an attractor that is dynamic. The other thing I should say is that, beyond seeing this in experiments, it's clear that we are able to store memories that are dynamic, right? We can remember sequences. We can remember the alphabet; we can remember songs.

Paul    01:10:13    867-5309. Is that

Carina    01:10:16    867-5309. I put that in one of my talks.

Paul    01:10:20    Yeah. Because you give an example of a network that can encode it: if you assign a digit to each of the nodes of the network, it spits out this limit cycle which, at the peak of each unit's firing, spells out the phone number. Which is the famous song by, I don't remember who [Tommy Tutone]. But so there are these dynamic sequences and limit cycles and all these different types of attractors, right, that you can get out of these networks.

Carina    01:10:48    Right, exactly. So when I'm remembering a phone number, it seems perfectly reasonable to model that as a limit cycle, like I'm remembering a sequence. And we don't do this anymore, right, because we have cell phones, but we used to look up phone numbers in the phone book and then have to remember them until we got to the pay phone to dial. And I would play it in a loop, right? I would think of the number in a loop. So there's something very dynamic about the way many memories are encoded. I should also say there's this whole area of central pattern generator circuits, which I'm very interested in, which has traditionally been modeled with limit cycles in the mathematical and theoretical neuroscience community. Some of Eve Marder's work with theorists has been very specifically about modeling central pattern generator circuits, and there you see limit cycles all the time. You see periodic sequences of neural activity that repeat. If you imagine animal locomotion, right, you have a horse walking or trotting or something, and there's a periodic, repeating sequence of activity.

Paul    01:12:10    Left hoof, right hoof, back hoof, front hoof. Right. Yeah, exactly. You've also demonstrated... that's something else. But so then you have these examples that you use in your talks and papers of these different kinds of attractors. And I'm sorry if I'm skipping ahead too much, I'm just aware of our time, but I'm wondering: in some sense, these are still small, kind of toy models. And the background inhibition that I talked about, there are just two rules, right, for this inhibition that affects all of the units the same way. So A, I'm wondering, can we really apply this to real, messy brain data, right, and look at recordings and say something about the dynamical structure, and prove it mathematically, in some structure of the brain? And then B, just how far up do you feel like you can scale these models in a tractable way? Cuz I know that you have submodules, and there are rules for motifs being embedded within larger motifs, and you can still look at that structure and talk about some of the chaotic, well, talk about some of the types of attractors that come out. So is this something that will just scale easily?

Carina    01:13:25    So it's true that the examples I show in my talks, which are the ones we've studied the most, kind of show off the techniques. Yeah.

Paul    01:13:34    Which are super cool by the way.  

Carina    01:13:36    Right. And they tend to be small, because we're designing them to be. So one of the questions that we like to ask is: how do you encode multiple dynamic attractors at once? If you go back to the Hopfield model, one of the challenges was, okay, how do I store multiple memory patterns as fixed points and keep them all, so that you can still recover them and you don't have too much interference? And if you ask that question about dynamic attractors, it's trickier, cuz now you're no longer talking about isolated, stable fixed points. Now you have these larger dynamic attractors: each is a subset of the state space, a big loop of activity, and you have many of them, and they may share the same neurons between them.

Carina    01:14:24    And you want to know: is it possible to store those in a larger network in a modular fashion that preserves the dynamic attractor, even as you're adding other stuff to the network? And what we have found are rules that allow us to predict when one of these smaller networks that produces a dynamic attractor, when that dynamic attractor will persist when embedded in a larger network. Sometimes we can predict that it will happen, and sometimes it's happening even though we don't have a good handle on why. But there's something about the inhibitory structure that really helps to encode these structures. And so I would say that in order to scale things up, we really do depend on this locally strong inhibition in order to keep the attractors clean.

Carina    01:15:23    I don't know if that makes any sense, but somehow the inhibition is what allows the attractors to really lock in and stay there, even as you make the network larger. And so we've done these experiments, for instance, where, well, one big thing that we do is we try to classify motifs that support attractors. So one of the questions you might ask is: okay, I have this large network, what are the substructures that are really supporting these dynamic attractors that you might see? Going back to the experimental data I was telling you about, where you might have some large transient activity, and then it settles into some subset of neurons that are really firing, but in some kind of pattern that's dynamic, you might ask: okay, well, there's a subnetwork for those neurons alone.

Carina    01:16:13    And somehow that subnetwork is supporting this lower-dimensional attractor, but it's embedded in the larger network, right? So how does that happen? How can you take a dynamic attractor that is really supported on a small network and still have it persist when you've embedded it in the larger network? That is definitely something that we think a lot about and have made some progress on. So mathematically, one of the things we can prove theorems about is when the unstable fixed point that's associated to the dynamic attractor will persist. And that gives us clues as to when the attractor will persist. And so via the fixed point structure, and via the way the fixed point structure changes or is preserved when a subnetwork is embedded in a larger network, we get strong clues as to whether those attractors survive. That's the sense in which we think about scaling it up. The perspective is that somehow you have modules, you know, subnetworks in the brain that do things, right, and you want to understand how they can be embedded in the larger network so that, let's say, they can still do their thing and not be destroyed, in terms of the activity they produce, by the way that they're embedded. So those are the questions that we have.

Paul    01:17:37    Sorry to interrupt: is this something that would potentially solve the binding problem? As in, you think of looking at a horse, like if you're in front of a horse, right, you see the horse, you smell the horse, you can hear the horse and all that, and it's a cohesive experience, right? Are these submodules that are in some topological, toroidal shape that somehow, and that's the binding, voila?

Carina    01:18:02    Yeah. I mean, I've thought a little bit about the binding problem because, you know, I don't wanna get too far ahead of myself, but one of the cool things that we've found, and that we can predict via the fixed point structure, is these attractors that we call fusion attractors. So it's like, oh, I have an attractor, some dynamic pattern or fixed point supported on one subnetwork, and I have another subnetwork that has another kind of attractor. Previously I was saying, okay, there are ways we can embed those subnetworks to preserve the attractors in the larger network. So that's one question, right: how do you embed so that these individual attractors are preserved in the larger network? But there are other ways to embed them where you lose the individual attractors but you gain an attractor that really looks like a fusion between the two.

Carina    01:19:01    So it's like they're both going at the same time. And that is really striking, because now somehow I've been able to take these different pieces and connect them up in such a way that I still see the individual attractors going. So maybe I have a 3-cycle going for some subset of neurons, and a fixed point or some other dynamic attractor going for another subset of neurons, and now I've embedded them together and connected them in such a way that, in the larger network, what I will see is them turning on together into a single higher-dimensional attractor that encodes both attractors at once. And that feels like a binding kind of thing.

Paul    01:19:45    And/or the emergence of consciousness or something. Yeah.

Carina    01:19:48    <laugh> Yeah, no, not sure about consciousness. And I don't think CTLNs are sentient. <laugh>

Paul    01:19:58    Oh, come on. Why is that getting so much traction? Yeah.  

Carina    01:20:03    But  

Paul    01:20:04    So I know that I've already taken you over time, but maybe the last thing that we can end on is just: how plausible is it, given how messy the brain is? And we haven't even mentioned, just to bring it back to, like, a deep learning network, where the whole thing is about the weights changing and learning, right? These networks are so abstracted that you fix the weights, you fix the parameters, and there are very few parameters, which is what makes them tractable in the first place. But then, applying that to real wet brains, is there gonna be a clean mapping there?

Carina    01:20:38    Yeah, so I should say, the CTLNs, I do think of them as having fixed weights for a graph, but there are these parameters for the weights, right, that we are allowed to change. And so one of the goals, right, is that the graph rules, for instance, which tell us how to relate fixed points to the structure of the graph, those are parameter-independent. So I can change the weights within some, what we call a legal range: I can alter the weights and preserve the basic structure of the fixed points. So we are trying, and we often do, prove results that are largely independent of those weights. That said, even a two-parameter family is still a very small number of parameters for, you know, a network of n neurons.

Carina    01:21:27    And so we also study more general threshold-linear networks. And one of the things that we have found is you often have the same attractor that can appear in different networks. I mean, this is not so surprising, given what I've just told you, that we think about how you embed a particular subnetwork so that the dynamic attractor is preserved, right? But once you understand the rules for that, then what that tells you is that there are weights that you can change in the larger network that keep the attractor intact. And so that really addresses kind of the learning.

Paul    01:22:08    So there’s a robustness.  

Carina    01:22:10    Yeah, there's a robustness. And there's this concept that, once I've embedded a dynamic attractor, say on some subset of neurons, five neurons, ten neurons, in a larger network that maybe has hundreds or thousands of neurons, there are now edges that I wanna freeze, right? There are weights that I don't really wanna change, because I wanna preserve my attractor. But there are other edges, other weights, that are gonna be free, and I can freely move them and still keep this attractor. And I have these simulations where you show moving through that weight space, and it's really striking, because you see solutions to different dynamics, different dynamical systems, and they're just right on top of each other. It's really like the same attractor, but I'm changing edge weights.
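A minimal sketch of that robustness, reusing the CTLN construction from the earlier sketch: rerun the same 3-cycle graph with two different (eps, delta) pairs inside the commonly cited legal range, 0 < eps < delta / (delta + 1), and check that an oscillatory attractor with the same cyclic structure persists. The legal-range inequality is quoted from memory and should be checked against the papers.

```python
import numpy as np

def ctln_weights(adj, eps, delta):
    # Same rule as before: edge j -> i gives -1 + eps, absence gives -1 - delta.
    W = np.where(adj == 1, -1 + eps, -1 - delta).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, theta=1.0, dt=0.01, steps=20000):
    x = np.array([0.2, 0.1, 0.0])                  # asymmetric start
    traj = np.empty((steps, W.shape[0]))
    for k in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj[k] = x
    return traj

adj = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])  # 3-cycle: 1 -> 2 -> 3 -> 1
for eps, delta in [(0.25, 0.5), (0.1, 0.8)]:       # both satisfy eps < delta/(delta+1)
    tail = simulate(ctln_weights(adj, eps, delta))[-5000:]
    amp = tail.max(axis=0) - tail.min(axis=0)
    print(f"eps={eps}, delta={delta}: oscillation amplitudes {np.round(amp, 3)}")
```

The amplitudes change with the parameters, but the qualitative attractor, a limit cycle in which the three units fire in graph order, survives throughout the range, which is the parameter-independence point being made.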

Carina    01:22:58    And I'm changing the network, but I'm really preserving this attractor in this piece of it. And so what we wanna understand is: who's allowed to be plastic, right? What weights are allowed to change in order to preserve an attractor, and what weights, if we change them, may actually lose the attractor or distort it? And understanding that is really the key to developing learning rules that would allow you to store multiple attractors in a network without distorting them. Because now it's no longer an issue like in the Hopfield model, where the question is how many memory patterns I can store without losing the previous ones; here, it's not just about losing the attractor. You can corrupt it, because now it's not a point anymore.

Carina    01:23:47    Yeah. Right. It's a whole dynamic attractor, and you can really distort it. So how do you preserve the full dynamic attractor in this larger network? And so we are studying, in the broader TLN space, so not confined to the combinatorial threshold-linear networks, which are a subfamily, but in the broader family of threshold-linear networks, where in principle all the weights are allowed to change freely, we are trying to understand which weights matter and which weights don't matter for particular attractors, and thereby figure out where the plasticity is. And so the vision is that somehow, in the networks in the brain, once you encode certain attractors, you might wanna stabilize certain synapses, but others are left free and can be freely changed to store a different attractor, some other song, some other pattern of activity, and still guarantee that you preserve the first one. Right? So that's how you and I can learn the same song and not corrupt everything else we know.

Paul    01:24:57    But I can learn Jim's phone number too. And Mary's, and, uh,

Carina    01:25:01    Right. Yeah. And I don’t have to know those, and somehow you need to know those <laugh>  

Paul    01:25:05    Right. But I need to make a song out of them to remember them, of course. As we learned with, uh, what was her name in that song?

Carina    01:25:12    Jenny.

Paul    01:25:13    Jenny, I believe  

Carina    01:25:14    It was Jenny. Jenny's

Paul    01:25:15    Phone number. Yeah, there you go.

Carina    01:25:16    Yeah. <laugh>

Paul    01:25:16    Okay. Uh, okay, Carina, I'm glad this is all so easy for you, and I appreciate you talking with me. I promise this is the last question: how much more math are we gonna need to, well, quote-unquote, understand the brain? Do we have the right math tools, or how many more math tools will we need to invent and/or apply?

Carina    01:25:43    Ah, that's a loaded question. I think we're still very limited in our math tools, honestly, and I'll just quickly say why. Neural networks fundamentally are high-dimensional nonlinear dynamical systems, and high-dimensional nonlinear dynamical systems are a nightmare mathematically. <laugh> It's really hard. And if you see what people do when they analyze networks, I mean, it's striking. To this day, in 2022, people will simulate large nonlinear networks that do all kinds of crazy things, and then when they come down to analyzing them mathematically, they linearize. It's like: linearize around a fixed point. Which, by the way, requires that you have a fixed point to linearize around, and a single one that somehow is representative of the activity of the whole network. When you consider that real networks probably have multiple fixed points, multiple attractors, you can't just pick one point in the space to linearize around.

Carina    01:26:50    And so I think there's a fundamental gap between the tools we have to analyze large dynamical systems, which tend to be just linear algebra tools, linear tools, and the beast <laugh>, which is high-dimensional nonlinear dynamics. And that's a gap that exists in the math world too. It's not that mathematicians secretly have books where they've figured all of this stuff out; it's just a real gap. And I think with threshold-linear networks, I am trying, in my own way, to fill that gap in some very simplified setting: kind of the simplest nonlinearity you can have, but fully high-dimensional systems of differential equations with a very simple nonlinearity, and trying to actually develop mathematical theory that would allow us to predict. So, when it comes to linear systems, I can give you a giant matrix.

Carina    01:27:49    If you have a linear system of differential equations, we teach our undergrads how to solve these, and you can predict long-term behavior. If you have a Markov chain, you can predict the long-term behavior, the long-term dynamics; you can answer that question mathematically. You put nonlinearity in there, and now you're screwed. <laugh> But if the nonlinearity is simple enough, like threshold-linear, you're not screwed, not completely, right? And that's what the threshold-linear network is trying to do. It's trying to generalize the theory of linear systems of differential equations to a mildly nonlinear setting: mildly, but still dramatic enough that you get dynamic attractors and all these things that you don't see in linear systems. In linear systems you have one fixed point, it's either stable or unstable, and that's the end of the story.
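A minimal sketch of the linear-tools point just made: for a Markov chain, the long-term behavior really is a linear-algebra computation, since the stationary distribution is the left eigenvector of the transition matrix with eigenvalue 1. The 3-state chain here is an illustrative assumption.

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],    # row-stochastic transition matrix:
              [0.2, 0.7, 0.1],    # P[i, j] = Prob(next = j | current = i)
              [0.0, 0.3, 0.7]])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. an eigenvector of P transpose, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
stationary = np.real(eigvecs[:, idx])
stationary /= stationary.sum()
print("stationary distribution:", stationary)

# Cross-check by brute force: run the chain forward many steps.
print("row of P^100:", np.linalg.matrix_power(P, 100)[0])
```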

Carina    01:28:44    But once you put in that threshold nonlinearity, now you get the whole repertoire: you get chaos, you get limit cycles, you get multistability, you get all of that stuff. But you still somehow can piggyback on the tools of linear differential equations, in patches, right, with combinatorics, with care. You can kind of bootstrap from there and get access to the nonlinear phenomena. So that's what I'm trying to do. And the reason I'm trying to do it is because I do think it's a real mathematical gap. I think neuroscience is really a field that is experimentally super advanced, I would say, compared to, you know, 19th-century physics <laugh>, but mathematically, I think there are a lot of challenges, and that's part of why our theory and our modeling are still somewhat limited: because we're still relying on linear methods all the time. And that's really limiting our data analysis, and our dynamical systems modeling, and just our theory in general. So, unfortunately for some people, fortunately for mathematicians like me, I think there's a lot to do.

Paul    01:30:15    One person's challenge is another's opportunity. So, exactly. Thank you for my opportunity to speak with you, and thanks for coming on.

Carina    01:30:24    Thank you.