Brain Inspired
BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs

Support the show to get full episodes and join the Discord community.

Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It’s an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building “neuromodulation-aware DNNs”.


Sri    00:00:03    The goal is to try and build neuromodulation-aware deep networks that are informed by principles of neuromodulation, simply because there is no one-size-fits-all for neuromodulatory function. It really depends on the brain region, the neuromodulator, and the cell type.

Mei    00:00:24    We just want to open this discussion that there are really, you know, neuromodulatory effects at different levels, including the in vivo firing dynamics, and the cells can really behave differently in different contexts. I think that's also one point of the paper, and so we really want to bring that into the scope.

Speaker 4    00:00:48    This is Brain Inspired.

Paul    00:01:02    Hey everyone, it's Paul. So on the previous episode I had Eve Marder on the podcast, and one of the things that Eve has done, among many others during her career, is study the effects of neuromodulation on neural networks, in her case on a rather small neural network in crustaceans. But that conversation focused more on overarching principles and less on neuromodulation itself. On today's episode, I speak with Srikanth Ramaswamy and Jie Mei. So Sri recently started his Neural Circuits Laboratory at Newcastle University, and Mei is a postdoc at Western University. And I thought it would be a good time to have them on because they recently collaborated on a review that is focused on how principles of neuromodulation might benefit deep learning networks. So in this episode, we dig deeper into neuromodulation itself, how it affects brain states and dynamics, and how these principles might improve deep learning models, both in terms of their capabilities and in terms of modeling brains.

Paul    00:02:05    So in the computational neuroscience world, there's always the question of how much biological detail is needed to create good models. And of course, the answer depends on the question that you're asking, but Sri and Mei believe that as we learn more about neuromodulators, different neuromodulator systems, and how they interact to affect the landscape of ongoing neural network activity, it could lead to big improvements in model function without necessarily needing to just keep scaling up to bigger and bigger models. And they believe we're just at the beginning of understanding those neuromodulation principles. Uh, and it's clear that we're at the beginning of efforts to incorporate them into deep learning models. I also solicited a guest question from Mac Shine, who has been on the podcast before and is also interested in these topics. You can find the show notes at braininspired.co/podcast/131, where I also link to the review that I mentioned, which is called "Informing deep neural networks by multiscale principles of neuromodulatory systems." Thanks for listening. Enjoy. I wanted to start a little bit with your background. So Sri, I know that you were a part, a big part, of the Blue Brain Project, which is a very different beast. And we will talk about that project a little bit more in relation to this, but how did you come to be interested in neuromodulators?

Sri    00:03:26    Uh, so, as I said, when we were trying to tame this whole beast, this detailed model, probably the most detailed model that we ever built of the somatosensory cortex, we integrated all this hard-won anatomical and physiological data at the cellular and the synaptic level into a complex microcircuit-level model. But when we flipped the switch, when we simulated this model, we were slightly disappointed, because the model wasn't doing anything terribly exciting. All it was doing was to fire in highly synchronous bursts of action potentials. So that really got us wondering, like, what is this model doing? It's almost resembling an epileptic cortex. So what do we need to do to get it right? And that was when there was a bit of a eureka moment.

Sri    00:04:34    When we realized that, hey, look, we've been building this model based on data obtained from brain slices, which are all typically done at high levels of extracellular calcium. But lo and behold, in the intact brain, the levels of extracellular calcium are actually lower, about one millimolar, one to 1.2 millimolar. So what we then realized was we had to drop the levels of extracellular calcium in this in silico model for it to try to do something similar to the intact brain. And that's when we came up with ways to really decrease the level of extracellular calcium by reducing the transmitter release probability of synaptic connections in the model, and then the emergent dynamics of the model began to make a lot more sense.

Sri    00:05:45    It was a lot more desynchronized, which corroborated with what the intact brain was doing. But the brain has many other ways of doing this, right? So of course changing calcium is probably one way that we could do this with the model, but another way, another huge lever that the brain employs to change these network dynamics, is neuromodulators, like acetylcholine, right? So acetylcholine, the way it acts, for example, to change the brain from sleep to wakefulness, is really by modulating network activity from synchrony to asynchrony. So that's when we realized that there could also be other mechanisms at play to bring about this change in network dynamics, and that's really what got me interested in neuromodulation.
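A minimal sketch of the knob Sri describes: in a stochastic-release model of a synapse, each presynaptic spike releases transmitter with some probability, and lowering extracellular calcium lowers that probability, weakening the effective coupling between neurons. The release probabilities and quantal amplitude below are illustrative numbers, not values from the Blue Brain model.

```python
import random

def synaptic_drive(n_spikes, release_prob, quantal_amp=1.0):
    """Total postsynaptic drive from a burst of presynaptic spikes,
    each spike releasing a vesicle with probability release_prob."""
    return sum(quantal_amp for _ in range(n_spikes) if random.random() < release_prob)

random.seed(0)
# Slice-like high calcium -> high release probability (illustrative values).
high_ca = [synaptic_drive(100, 0.6) for _ in range(1000)]
# In-vivo-like ~1 mM calcium -> lower release probability.
low_ca = [synaptic_drive(100, 0.25) for _ in range(1000)]

print(sum(high_ca) / len(high_ca))  # mean drive near 60
print(sum(low_ca) / len(low_ca))    # mean drive near 25
```

Weaker, less reliable coupling of this kind is one route from high-calcium synchrony toward desynchronized dynamics; the full story in the microcircuit model of course involves many synapse and cell types.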

Paul    00:06:47    So Mei, how did you come to be interested in neuromodulators and neuromodulation?

Mei    00:06:52    So, um, I have a background as a computational neuroscientist, and I did a computational neuroscience project during my master's studies. I was working at the CNRS near Paris, hosted by <inaudible>. The thing I started working on was to simulate sensory perception in the visual thalamus. And we had a really biologically detailed model, which means we used the morphologies and the compartments of the interneurons to make a neuronal network. We call it a neuronal network because it represents the detailed characteristics of neurons. We also had recorded cell responses from, for example, the <inaudible> cells and also reticular cells. And the motivation of this project was that a PhD candidate at the lab was trying to do the same using a different simulator, called NEST, and he was not really able to capture or replicate the properties of the visual thalamus cells that they had recorded and observed in the experimental study.

Mei    00:07:55    So that's why we discussed that maybe the interesting channels were the reason, and you need to introduce a more biologically detailed model. And at that time I was using another simulator, called the NEURON simulator. We kind of played with the predefined mechanisms, including the channel kinetics as well as neuromodulators, using the MOD files of the NEURON simulator. And we actually saw a big difference when we had these mechanisms implemented in detail versus partly, you know, impaired. And I was really able to replicate this visual perception phenomenon we had observed, you know, by modeling the reticular cells and the principal relay cells biologically. So that's how I got interested in the whole thing, because, you know, having these mechanisms, we can really introduce a big difference in the neuronal properties, as well as the network properties and also the connectivity between neurons.

Mei    00:08:46    And I think this is a big drive for me. And, um, because I have also seen other approaches, you know, in computational neuroscience, that are using primarily point neurons, and with those we see big differences too. And deep learning is even more abstract; we have, you know, units as neurons that account for the error in backpropagation. So I think, after seeing all these different approaches, I became more and more interested in how, you know, we can add this biological realism to these models to make a difference.

Paul    00:09:18    So in some sense, both of your background work has focused on more or less a bottom-up approach, right? Building in all the details, running the simulation, and kind of seeing what happens. But deep learning networks are a different beast, right? It's very much: fit whatever model you do build, however much biological detail there is, to some behavior, which is in essence a top-down approach. And you know, I know one of the criticisms, at least of the Blue Brain Project, is that this bottom-up approach of just building all the details, turning it on, running it, and seeing what happens is like fundamentally flawed, right? But I know that you both appreciate both the bottom-up and top-down approach. So how do you guys think about bottom-up versus top-down? And Sri, I don't know if you want to just respond to that kind of criticism of Blue Brain in particular, but just in the larger picture, bottom-up versus top-down.

Sri    00:10:20    Right. I think what this convergence of bottom-up versus top-down buys us is... so traditionally all these top-down approaches have been a bit ad hoc, because they've always been built to reproduce a very specific phenomenon, right? So top-down models are great at reproducing something specific, but they probably can't reproduce something that's a lot more general. Whereas on the other hand, what a bottom-up approach affords us is a framework, or rather a reference, that could be used to try and understand what level of detail is required to model what phenomenon. And then, based on what level of detail is required, you could adjust, just shave off these details, and then only focus on what matters to contribute to a certain phenomenon.

Sri    00:11:28    So in some sense, I think a purely bottom-up approach is really a beacon, a guide, to try and tell top-down approaches what details matter and what could be included in order to reproduce a specific phenomenon. Because otherwise, I think if these two approaches don't come together, it's a bit of groping in the dark, because indeed, top-down models can tell us a lot about what's happening, but they might not really be able to pinpoint what are the finer details, the cellular and the synaptic details, for example, that are responsible for shaping a certain phenomenon. So this is where I think the convergence of bottom-up and top-down approaches is very crucial.

Mei    00:12:25    Yeah. Mostly people will say there is actually a clear dichotomy between top-down and bottom-up. But, I mean, I was seeing people really driving this research using a hybrid approach, for example, using the philosophy of computational neuroscience. One of the examples I've seen is from Alain Destexhe, published in 1999, I think. He was trying to work from the ground up, really, again on the thalamic cells, and the thing he was doing is that he had a really detailed morphological reconstruction of the cell, up to more than two hundred compartments within a cell. He also wanted to see what level of detail is required to replicate the sensory phenomena, and he tried to reduce the number of compartments in his model to, I think, around three or ten or something.

Mei    00:13:17    So I think also within the computational neuroscience community, there are a lot of people that are interested in trying to find the coarsest details that are required to replicate a phenomenon, without going into too much detail, because otherwise we may require a lot of computational power to really have a model running, especially, you know, considering that the brain has a great amount of neurons. And if we incorporate this kind of level of detail into all of them, it's gonna lead to enormous simulations. And in the meantime, I think what's interesting about the current trend in the deep learning community is that we have more and more work coming in the direction of having a hybrid model in between these two approaches. For example, I think from Blake Richards' lab in 2017 maybe, they have had a deep neural network.

Mei    00:14:05    But for the single units, they have tried to replicate a compartmental setting, so they had an apical dendrite instead of just a single point neuron. And in the meantime, we have also seen work from DeepMind where they tried to capture, you know, the firing of grid cells, the activity of these cells throughout the field when the agent is navigating. So I think we have gradually a more recent appreciation of this hybrid approach, by really having some top-down approach, and then we try to add more finer details to see whether it can do something different. Yeah.
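The two-compartment idea Mei refers to can be caricatured as a unit whose feedforward input drives a basal/somatic compartment while a segregated apical compartment, driven by top-down feedback, modulates the response. Everything below, the names, the multiplicative coupling, the tanh nonlinearity, is a simplified illustration in the spirit of that work, not the published model.

```python
import numpy as np

def dendritic_unit(x_ff, x_fb, w_basal, w_apical, alpha=0.5):
    """Toy two-compartment unit: the basal compartment integrates
    feedforward input; a segregated apical compartment integrates
    feedback and scales the somatic output (a cartoon of apical
    'teaching' signals)."""
    basal = w_basal @ x_ff                # somatic/basal drive
    apical = np.tanh(w_apical @ x_fb)     # bounded apical signal in (-1, 1)
    return np.maximum(0.0, basal * (1.0 + alpha * apical))

rng = np.random.default_rng(1)
x_ff = rng.normal(size=8)                 # feedforward input
x_fb = rng.normal(size=4)                 # top-down feedback
w_basal = rng.normal(size=(5, 8))
w_apical = rng.normal(size=(5, 4))

out = dendritic_unit(x_ff, x_fb, w_basal, w_apical)
print(out.shape)  # (5,)
```

Because the apical factor stays strictly positive here, feedback reshapes rather than silences the somatic response; richer couplings (e.g. plateau potentials gating plasticity) are what the biological proposals actually explore.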

Paul    00:14:42    Yeah. I mean, so thinking back to the early days of neural networks, right? In one sense that was bottom-up, because it was saying, well, the brain is made up of neurons, so let's make a network of neurons and then kind of go from there. But of course, then the training, you know, to do tasks, is the more top-down approach. But so, we'll talk about neuromodulators, and you give examples in the paper of various neural networks incorporating neuromodulation, and then of course there's a call for more of that. And Mei, you're talking about, you know, even using dendritic computation to build that into neural network models. And of course there are even neural network models with glial cells and their, you know, functional roles. So in some sense, I don't know if it's a small trend, but it's a trend with these different kinds of detailed microcircuitry and, you know, inductive biases, dendritic computation, neuromodulation... in some sense there is a push to start building these things into the models. But like you said, it becomes intractable computationally.

Paul    00:15:49    So my question, I suppose, is, you know, what is the right level of abstraction? And you might just echo what you just said, that we have to, you know, figure out what is needed to constrain. I mean, Sri, you were talking about the original modeling and the calcium in the system, and then having to change that, and then that's what got you on to neuromodulators in the first place. But it's almost overwhelming to think about the detail to add into a neural network, computationally demanding as it is. So how do we know what the right detail is?

Sri    00:16:24    The so-called right level of detail, or the correct level of detail, really depends on the kind of questions you want to answer, right? So Chris Eliasmith wrote this review many years ago; if I remember correctly, the title is "The use and abuse of large-scale brain models," something along these lines, right? So it really boils down to the kind of question you want to ask. If, for example, I would like to know the role of <inaudible> in modulating gamma oscillations in the neocortex, then I'd better model them. Otherwise, I won't know what these cell types are doing, right? So it really boils down to what kind of question you want to answer. And yes, sure, adding all this detail might seem daunting, but to a certain extent, it's not about this all-encompassing model.

Sri    00:17:26    That's like a silver bullet. It's more of a framework that lets you add the kind of detail you would like to. And then if, at the end of the day, you realize that this detail is not letting you answer the kind of questions you want to, then sure, feel free to isolate this level of detail and then see what really is contributing to the kind of phenomenon you'd like to explore. So in that sense, I think detail is not daunting; it is necessary. But what we need to understand is how to incorporate this in the most economical way possible, because adding more detail, of course, as Mei said, implies a lot more computational power. So there's the need to be judicious about adding detail. But on the other hand, this detail is there for a certain reason, and we don't know what the reason is. So unless we have some place, some placeholder, to include this detail, we never know what these details are actually contributing to what we'd like to study. So, again, I'd like to reiterate the fact that the level of detail depends on the kind of questions you'd like to answer.

Mei    00:18:50    And then I feel that, apart from the question itself, it also depends on... I mean, there are some other factors that are more realistic, for example, the budget you have, the duration of the project, and also the computational power you have. So sometimes we are bound to the machines we have, and we cannot really go into any further detail. And I think there is also something in addition to this point, which is that, in some cases, the brain's energy consumption is actually an interesting question in itself. The brain automatically optimizes itself in some way. It will optimize the cost function, or optimize the way the neurons communicate, and, like, how the connectivity is enhanced or reduced in, you know, certain areas, and it uses sleep and rest cycles to reset certain states of the brain. And I think these are quite interesting as well, because there is actually some active neuroscience research studying why the brain is so energy efficient. And this is something we can also try to think about, as a perspective from which we can maybe potentially tackle this highly computationally costly problem that we have in deep neural networks and also biologically detailed neural networks. Yeah.

Sri    00:20:07    I agree with Mei, but, yes, I mean, resources, that's something. But on the other hand, let's assume you had infinite resources, right? So what would you then do? Like, where would you stop adding all these details, right? So that's another thing. Do you go down to the level of proteins or single genes? Like, what would you do?

Mei    00:20:32    I agree with what you said. Yeah. So the question matters.

Sri    00:20:36    Right. So, I mean, ultimately, I think to some extent it boils down to what you'd like to answer. So, sure, the energy consumption part, that's indeed very important, but that also boils down to what computing resources we have. I mean, whatever computing architectures we have right now are quite energy greedy. So at the end of the day, perhaps to make these computing architectures more energy efficient, we would need to understand how the brain is implementing this energy efficiency, and to do that, perhaps we need to build models of some level of detail. Otherwise, we probably will never know why the brain is so energy efficient, right? So I think it's a bit of a chicken-and-egg problem, in my opinion.

Mei    00:21:32    So I personally prefer that if I have sufficient details that make something work, I won't really go into any more detail, unless the details are the things I want to study. So I think Sri is right on the point that it depends on the question I want to answer. If I want to replicate a cognitive process or phenomenon, and the details I have are sufficient, maybe I'm not going to try adding more detail just to play with it. But if I want to study the details themselves, I definitely want to go into another level of detail. So it depends. Yeah. And also, there's really no right or wrong level of detail, because, as they say, all models are wrong, but some are useful. So, you know, in that way, I think the right level really depends on the question you want to answer, given that, as a thought experiment, we have limited power and computational resources. So I think that's the point: the first thing you want to define is the question you want to answer, then what models you can potentially use to answer this question, then what is the most simplistic model you can try to start from, and what is the level of detail you want to stop at, maybe. So that's how I would frame it.

Paul    00:22:46    Yeah. So Mei, you just said the right level of detail is the level of detail that makes it work. And Sri, when you were talking, you alluded to Chris Eliasmith, who's been on the podcast, and his viewpoint. And one of the things that Chris Eliasmith actually focuses on, for his purposes, is that what makes it work is if it matches behavior, right? So he has this cognitive architecture, Spaun. Um, and in the paper that you guys wrote, you go through a lot of literature detail on neuromodulators: how they're related to cognitive functions, what we know about how they interact, and the different mappings of a particular neuromodulator onto various cognitive functions. Do we know enough about... so, you know, for example, dopamine, right? The classic reward prediction error, that's what it is, except it also plays like 17 other different roles that are being discovered all the time, right? And then on top of dopamine, you have all the other neuromodulators. So for instance, Sri, I know you're interested in histamine specifically, among others. Do we know enough about the relation between neuromodulation and cognitive functions and behaviors, and how the neuromodulators interact? Or is that part of the project, to learn more about these things by developing them into deep neural networks?

Sri    00:24:15    Okay. I think it's a bit of both, because we definitely are really at the tip of the iceberg when it comes to neuromodulator function. So what these neuromodulators are probably doing is analogous to a relay race, right? One neuromodulator sets the stage for another to take over. So one neuromodulator passes the baton on to another neuromodulator, which takes over to capitalize on a platform that's already built, to then prime the brain to do something, right? So neuromodulatory function depends on brain state, and for the brain to reach a particular state, it needs to activate certain neuromodulators, right? A combination, or whatever. So it's really a synergistic effect that's going on throughout, where the brain needs to get to a certain state, and you can't get there without neuromodulators.

Sri    00:25:15    And these neuromodulators are constantly priming the brain to get to a particular state. So in this way, it's a very synergistic problem, and we are only beginning to appreciate what these permutations and combinations of neuromodulator effects are. But a lot of this is being studied at the global level, at the behavioral level. So while we appreciate that a certain neuromodulator like dopamine, as you said, is probably doing 17 different things in addition to reinforcement or reward prediction error, there are also other neuromodulators that are very likely causing the dopaminergic system to execute these functions, right? So we like studying what's happening at the global level, but there's a lot, lot more that's happening at the local level, because there are a lot of different cell types. Let's just take the example of the neocortex.

Sri    00:26:19    Right. Um, so I've tried to model, in collaboration with the Blue Brain Project, a little crumb of the somatosensory cortex that's roughly the size of a grain of sand, right? About 0.3 cubic millimeters of cortical tissue. So this little crumb of cortical tissue has about 55 different morphological types. So, combinatorially speaking, 55 different morphological types, assuming all-to-all connectivity, would lead to 55 squared, or 3,025, possible synaptic connection types. But we know, by virtue of axonal and dendritic geometry, that only about 2,000 or so of these 3,025 combinations are biologically viable, because the axons of some cell types cannot reach the dendrites of others. <inaudible> Right. So over the past 50 years or so of doing whole-cell recordings of these synaptic connections, we only have quantitative data on 20 or so of these synaptic connection types.

Sri    00:27:33    So that's about 1%, right, of the 2,000 or so biologically viable connection types. And of these 20 or so that we have data on, we probably know what a neuromodulator like acetylcholine might be doing to less than 10 of these combinations. So the data on the effects of neuromodulators on synaptic transmission alone is extremely sparse. We just don't know what's happening at the local level. So the need of the hour, to try and understand neuromodulation, is to build these bridges between what's happening locally at the cellular and synaptic level, and then try and connect this to what could be happening at the global, behavioral level. But we are not there. So of course, this asks the question: is there any incentive in doing all these 2,000 or so whole-cell recordings and applying <inaudible>? There probably isn't. So we need to come up with clever ways to try and understand some predictive patterns of what could be happening at the cellular and synaptic level due to a single neuromodulator.

Sri    00:28:45    Like acetylcholine: it could be that pyramidal cells all have similar receptors for acetylcholine expressed on their dendrites, so perhaps acetylcholine could be having the same kind of effect across them. We don't know. So unless we break down this problem into something a bit more tractable, it's going to be a real challenge to understand what these neuromodulators are doing across cell types, even in a small part of the brain. So I think the need of the hour is to really try and come up with approaches to bridge levels of detail, to try and understand how the local modulation, the local effect of neuromodulators, translates into something more global, something at the behavioral level.
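Sri's back-of-the-envelope combinatorics can be checked in a few lines. The 2,000 viable, 20 recorded, and 10 cholinergic figures are the approximate counts he quotes above, not exact numbers:

```python
n_types = 55                  # morphological cell types in the microcircuit
possible = n_types ** 2       # all-to-all synaptic connection types
viable = 2000                 # approx. biologically viable (axon must reach dendrite)
recorded = 20                 # approx. types with quantitative paired recordings
ach_known = 10                # approx. types with known acetylcholine effects

print(possible)                                 # 3025
print(f"{100 * recorded / viable:.1f}%")        # ~1.0% of viable types characterized
print(f"{100 * ach_known / viable:.1f}%")       # ~0.5% with known ACh effects
```

The point of the arithmetic is the gap: even generous estimates leave well over 98% of viable connection types without quantitative data, which is why Sri argues for predictive patterns rather than exhaustive recording.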

Mei    00:29:35    Yeah. So I think what's interesting about the neuromodulators is that they invite a lot of modern technological approaches, allowing for different explorations from the in vitro and in silico perspectives. So, according to what Sri said, there are many kinds of interactions across the types of neuromodulators. For example, as he mentioned, you know, dopamine can only achieve its goals under the modulation of other neuromodulators, such as, I think, acetylcholine as well as serotonin, because there is an opponency existing between serotonin and dopamine, in the way they account for punishment and reward, respectively. And it also has a lot of implications in cognitive disorders. And, of course, another principal example is the cholinergic regulation of dopaminergic cell activities within the basal ganglia, which has a lot to do with movement initiation and execution in Parkinson's disease.

Mei    00:30:34    So, I mean, I think what's interesting to us is that we have seen these neuromodulatory processes at all the different spatiotemporal scales, and, from one aspect, these are not so finely defined in deep neural networks. So this is something we can try to incorporate, and there is a lot to be done on that, computationally and experimentally. And, of course, as computational neuroscientists, we use different approaches, and we can try to work with deep learning practitioners to have a hybrid approach between biologically detailed modeling and deep neural networks. And the second thing is that we are not only using computational models to kind of replicate the activity, but we also want to use these models to have better, you know, refined computational hypotheses on how these areas might be interacting with each other.

Mei    00:31:24    For example, if you're trying to use an in vivo approach to patch-clamp some cells in a certain brain area, for example a midbrain nucleus such as, you know, the basal ganglia, the striatum for example, these are areas that are really hard to reach using the traditional experimental approaches. So, yeah, I was really fortunate when I was working at Gilad Silberberg's lab in 2015, and I have kind of done the first patch-clamping, in the world, of the substantia nigra dopaminergic neurons in the basal ganglia. So, I mean, yeah, it was really nice for me; I was working with a really experienced postdoctoral researcher at that time. So, my point is that these areas are really hard to explore, you know, <inaudible>. And we used awake, behaving animals, and these experiments are really hard to conduct, because the animals are kind of performing and behaving all the time after they receive, for example, sensory stimuli to the whiskers or so. And also, like, these...

Paul    00:32:23    These are patch clamps in awake, behaving animals? Okay, because I'm going to ask you a technical question in a second about that. Okay, go ahead. Sorry.

Mei    00:32:30    And then, like, these are areas that are hard to patch from, and sometimes we have maybe two neurons per day; that's already good enough. And, uh, so definitely, if we can kind of replicate certain experimental conditions in computational models, that's going to help us have better computational, you know, hypotheses, and those can then be tested using experiments further. So I think these experimental approaches and computational modeling really go in a cycle and promote each other in a certain way, because if we have better experimental results, we have more biological constraints in modeling, and if we have better models, we may have better hypotheses on what the biology is doing. So we are not only using this to really make better models; we also use mechanistic models to create better hypotheses. That's what I wanted to say here. Yeah.

Paul    00:33:18    Okay. So one of the drivers for building neuromodulators, at multiple scales and multiple levels of detail, into deep learning models is to help us understand what's actually going on in the brain, right? And that's a lot of what we talk about on the podcast, using deep learning and AI methods to help us understand what's going on in the brain. But as I was reading your review, I thought the other way to go would be: why not invent new neuromodulators for deep learning models and see what they can do? Instead of constraining the neuromodulators by thinking, well, acetylcholine does this in the brain, so let's build it in like that in a deep learning model, why not invent a new neuromodulator that you somehow fit into the model? If we did that, would it come out behaving like acetylcholine? Would it and histamine interact the same way? Or would there be other emergent properties that are good for the way the models function? Sorry, it's kind of an out-there question.

Sri    00:34:31    Well, that's an interesting question, I think. Right now, I don't think there are deep network models that try and incorporate mechanisms of specific neuromodulators, except perhaps in reinforcement learning, but there they're all focused on dopamine.

Paul    00:34:53    But that's what you've sort of proposed, right, in the paper.

Mei    00:34:57    Exactly. Exactly. Yeah. It's a different story for neurorobotics, because if you consider deep learning, it's primarily people that are interested in dopamine. And if you go a bit over to neurorobotics research, there are researchers who have studied certain neuromodulators, for example the studies by Krichmar, and he has actually proposed different roles for individual neuromodulators and tried to implement them all in one of the robot models. So the different fields take different approaches, and definitely we have seen more recent interest in reinforcement learning regarding dopamine. And I think one recent proposal from Harvard is that we can try to incorporate acetylcholine in parallel with dopamine, because these two interact a lot. So yeah, we have seen some interest in that field, but to our knowledge, our paper is kind of the first one to try to push people to pay more attention to the other neuromodulators, even beyond the ones we mentioned in the paper.

Mei    00:35:58    Yeah. Also there's histamine, and there are neuropeptides as well. So I think that's quite important. And you also mentioned that we might use another neuromodulator that's not a biological neuromodulator in the brain. I'm not sure about that, but I think there are combinatory effects achieved by having these neuromodulators working together in the brain. So maybe the novel neuromodulator you're proposing would actually have effects equal to what we get when the real neuromodulators are working in combination, right? So that's a possibility.

Paul    00:36:30    But then it would be redundant, because if you're also modeling the two neuromodulators that have those effects when they're working together, then why add a third, I suppose.

Sri    00:36:40    Right. But this just reminds me of an ingenious study where they invented a neuromodulator and they called it backpropamine.

Paul    00:36:54    Uh,  

Sri    00:36:55    Right. But somehow the role of backpropamine, as the name is pretty self-explanatory, was kind of limited to backpropagation, which is interesting in itself.

Paul    00:37:08    Well, it was invented for backpropagation.

Sri    00:37:10    Yes, you're right. It was invented for backpropagation. But indeed, there is clear evidence that it depends on the kind of neuron, the brain region, and the neuromodulator, so this is really like a 3D problem: the neuromodulator, the brain region, and the cell type. Of course, the definition of backpropagation in machine learning and the back-propagating action potential are a bit different, but nevertheless, these neuromodulators seem to have very specific effects in actually modulating the so-called back-propagating action potential itself. So there could be completely new functions for neuromodulators solely in modulating the back-propagating action potential, and perhaps a fictitious neuromodulator like backpropamine could actually tell us something very new. So indeed, yes, there could be a new neuromodulator we have no idea about.

Mei    00:38:35    Yes. Okay. But to add one or two sentences about that study called backpropamine: they were partly inspired by Hebbian learning when they were trying to deal with the connectivity between neurons, and they also had a kind of retroactive version of neuromodulation, where they used dopamine-inspired mechanisms for the change of the weights. So although it sounds like something totally novel, it's actually, as Sri was saying, based on Hebbian learning. So it's not a hundred percent novel or a made-up neuromodulator, but it actually worked, combining effects at the connectivity level as well as at the neuronal level. So I think that's quite an interesting study.
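[Editor's note: the mechanism Mei describes, a Hebbian eligibility trace that is only written into the weights when a dopamine-like signal arrives, can be made concrete with a minimal numpy sketch. The sizes, dynamics, and the single hand-placed "dopamine" pulse below are illustrative assumptions, not the published backpropamine model.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of retroactive neuromodulation: a Hebbian eligibility trace
# accumulates pre/post co-activity, and a dopamine-like scalar m(t) decides
# when that trace is actually written into the plastic weights.
n_pre, n_post = 4, 3
w_fixed = rng.normal(scale=0.5, size=(n_post, n_pre))   # slow, trained part
w_plastic = np.zeros((n_post, n_pre))                   # fast, modulated part
trace = np.zeros((n_post, n_pre))

eta, decay = 0.1, 0.9
for t in range(20):
    x = rng.random(n_pre)                    # presynaptic activity
    y = np.tanh((w_fixed + w_plastic) @ x)   # postsynaptic activity
    trace = decay * trace + np.outer(y, x)   # Hebbian eligibility trace
    m = 1.0 if t == 10 else 0.0              # sparse "dopamine" pulse
    w_plastic += eta * m * trace             # trace applied only when modulated
```

In a real model the modulatory signal would itself be computed by the network and trained with backpropagation; here it is fixed just to show how the trace gates plasticity.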

Paul    00:39:20    But what you're proposing in the paper, and please correct me if I'm wrong, is using what we do know about neuromodulators and their role in cognitive functions to constrain the models in that way.

Sri    00:39:34    Indeed, that's correct. As the title of the paper itself says, the goal is to try and build neuromodulation-aware deep networks that are informed by principles of neuromodulation, simply because there is no one-size-fits-all for neuromodulatory function. It really depends on the brain region, the neuromodulator, and the cell type, and possibly also the domain within a particular cell type, because there could be different receptors expressed in dendrites and axons. So indeed, this is really the proposal, the grand proposal, and this is the direction that we would like to pursue.

Mei    00:40:24    Yeah, and also, I think this is pretty much a demonstration of what neuromodulators in the brain are doing. We are aware of some of the problems the deep learning community is trying to solve, and these are problems that are tackled by the brain using this kind of adaptive design, the neuromodulatory system. So we think we can maybe provoke some thinking in these new directions and showcase: okay, this is how the brain does it by having a neuromodulatory system. This is not a unified solution to all your problems; these are how the brain tackles specific individual problems using these systems and processes. And we wanted to attract people and maybe make them ponder a bit more within this field. Yeah.

Paul    00:41:10    All right. So here's an unfair question I want to ask both of you; it's two questions in parallel. On a scale of zero to 100, zero being like nothing and 100 being solved, or perfect knowledge or something: the first question is, where are we in our understanding of biological neuromodulators, their interactions, and their roles? Zero to 100. And of course there's a much longer history there. And the second question is, where are we, zero to 100, in terms of exploiting the potential of neuromodulators in deep neural networks?

Sri    00:41:47    Okay, I'll probably take the first part. Where are we in terms of understanding neuromodulatory function, the role of biological neuromodulation, on a scale of zero to a hundred? I'll be very generous. I'll say three, maybe.

Paul    00:42:07    Three. Okay. Then that’s generous.  

Mei    00:42:09    Yeah, because the first big step is knowing what you don't know, and we don't know what we don't know. That's the biggest issue here, right? We don't know the extent of this whole field, and that makes us even less confident when we're faced with this kind of quantifying question.

Paul    00:42:26    Would you go higher or lower than three?  

Mei    00:42:28    Oh, I initially thought it's around 10 or something, but since Sri says three, and he definitely has more of a say when it comes to experimental studies on neuromodulation, I'm going to take it down to five. Yeah.

Sri    00:42:46    And just to add to my choice of three: three is simply because we've always been studying neuromodulators, or rather the function of neuromodulators, as though it's always a single neuromodulator that's doing something, primarily because a lot of this knowledge comes from slice physiology, where the approach has always been to apply a certain neuromodulator and look at what's happening at the level of cells and synapses due to that particular neuromodulatory agonist. But this is clearly not how things are implemented in the intact brain, right? As you said, there are all these interactions, and we simply have no way to try and quantify them. Of course, now we're slowly beginning to have the tools where it's possible to genetically express sensors and things like that in neurons, to simultaneously monitor the function of multiple neuromodulators. But we're still getting there, and we definitely have a long way to go to actually understand how neuromodulator interactions are shaping brain function. So hopefully I can be a lot more generous and predict that three could probably go to around five or six in the next few years.

Paul    00:44:43    Well then, the parallel question, and Mei, I'll ask you to answer this one first: where are we, zero to a hundred, in terms of exploiting neuromodulatory function in deep learning? In terms of progress, I suppose, where the potential is, let's say, the maximum.

Mei    00:45:04    Yeah. I mean, the progress is really, I would give it,

Paul    00:45:11    Something, something  

Mei    00:45:12    Under five. Yeah, it's definitely not exceeding 10; something under five. Because if you read the couple of papers using so-called artificial neuromodulation in deep neural networks, they define neuromodulation in really loose terms. Anything that's adaptive and self-adjusting, self-refining based on feedback, or something that is locally applied to a certain subpopulation of the network, where the factor can adjust itself, is called neuromodulation. So it's a really loose definition of neuromodulation. I don't know if this is being too critical, but it's definitely a fancy word for people to use. And of course they have demonstrated different behavioral benefits and learning benefits from having a more adaptive mechanism in their studies, but we really have a loose definition of neuromodulation that's not always biologically plausible, and these mechanisms have been applied to neural networks of relatively small scales.

Mei    00:46:14    So for example, we have reviewed a couple of studies like that in the paper as well; we initially had a table dedicated to that. Counting the studies we have observed in this field, I'd say it's up to 10 or something; if you add neurorobotics it's going to be around 15 or 20, because we actually have more studies from the robotics side. But if we're only discussing the pure DNN approach, I think people were mostly using these mechanisms in smaller networks, because there they have a better view of what the mechanism is doing from the inside. They were really applied to tiny networks, sometimes of merely a hundred neurons, where they were trying to observe what each neuron is doing under different neuromodulatory factors, which act as a kind of scaling of the connectivity between neurons, and where they were trying to see if a simple network can adapt quicker and learn different associations between stimulus and response if we add a richer number of neuromodulatory factors.

Mei    00:47:10    So I think the scale of the neural networks we have been using to investigate artificial neuromodulation is kind of homogeneous; it's mostly small networks, and we have only seen one or two studies applying this kind of thing to a larger network at the connectivity level. For example, the Miconi paper is actually the backpropamine thing Sri discussed. And also at the implementation level, people are implementing it in all different ways. For example, it can be represented by a hypernetwork that takes input from the deep neural network and computes the hyperparameters or modifications applied at each point, or it can be represented as a novel activation function, or it can be represented as scaling factors that are applied to the whole population or to overlapping or non-overlapping subpopulations.

Mei    00:48:05    So again, you see, there's a really wide, loose definition of neuromodulation, and people are trying to inject this into neural networks in different ways. We all have different takes on that. And I hope there's a chance we can have neuroscientists and deep learning practitioners work together on something that's more detailed, something in between, and we can test those in different architectures. I think this is something interesting that I'm currently probing, and I want to see more of this work coming, instead of having really loosely defined and heterogeneous objectives in studies. But of course, with different objectives we have observed benefits at different levels, from different aspects, and I think that's also the interesting thing about current studies in this field.
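[Editor's note: one of the implementation styles Mei lists, a context-dependent gain factor applied to a subpopulation of units, could be sketched like this. The tiny "hypernetwork", the mask over half the units, and all sizes are hypothetical choices for illustration.]

```python
import numpy as np

rng = np.random.default_rng(1)

# A "neuromodulatory" signal computed from the network's own input is
# applied as a multiplicative gain to a masked subpopulation of units.
def layer(x, w, mod_gain, mod_mask):
    pre = w @ x
    gain = 1.0 + mod_gain * mod_mask      # scale only the modulated units
    return np.tanh(gain * pre)

n_in, n_hidden = 5, 8
w = rng.normal(size=(n_hidden, n_in))
w_mod = rng.normal(scale=0.1, size=n_in)        # input -> scalar "modulator level"
mask = (np.arange(n_hidden) < 4).astype(float)  # modulate half the units

x = rng.random(n_in)
g = np.tanh(w_mod @ x)                # context-dependent neuromodulatory level
h_mod = layer(x, w, g, mask)
h_plain = layer(x, w, 0.0, mask)
# Units outside the mask are unaffected; units inside shift with the context
```

The same skeleton covers the other variants she mentions: replace the scalar gain with per-unit outputs of a hypernetwork, or fold the modulation into the activation function itself.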

Paul    00:48:54    Very good. Sri, you need to say your number on the scale, and the why, because I wanted to respond to that, but I've got to make you say your number as well.

Sri    00:49:06    Yeah, I would still say it's certainly much below 10. Let me stick to five, or maybe even lower than that, because, as Mei said, it's not like there haven't been any attempts to include neuromodulation in deep networks. But on the other hand, as I mentioned before, it really boils down to what element of neuromodulation one would like to include in these deep neural networks, because again, it really depends on the brain region, the cell type, and the neuromodulator. And right now, at least in terms of implementing these details in deep neural networks, we're definitely not there yet, because of course we are very handicapped by the biological knowledge itself.

Sri    00:50:10    And it's another process to try and transform this biological knowledge into something we could implement in deep networks. But there are some very global analogies. As Mei said, Kenji Doya's work from the early two thousands has always been very illuminating in setting the stage, in conferring a specific hat on each of the so-called big five, or rather, I would say, big four neuromodulators: acetylcholine, noradrenaline, dopamine, and serotonin. And indeed, Paul, as you mentioned before, I'm currently studying the role of histamine, because I think it's very underappreciated and has been restricted to its role in regulating sleep and wakefulness, but I believe there's a lot more histamine could do, therefore I say the big five. But we know a lot more about the roles of the big four.

Sri    00:51:25    And we've been pretty good at assigning a specific hat to what each of these big four neuromodulators is doing, for example acetylcholine in terms of gain modulation, or dopamine in terms of reward prediction error, and more recently serotonin and things like that. But again, we really need ways to identify the nuances of how these neuromodulators operate, and their interactions, because they don't operate in isolation; they always interact. The interaction bit is a little murky, and we need to understand how they interact and hopefully come up with ways to encapsulate and formalize the ways these neuromodulators interact into deep neural networks. Currently we don't have that, because we've only looked at neuromodulation as one global, all-encompassing silver bullet, but this is clearly not the case. So we need ways to try and model the spatiotemporal specificities of neuromodulatory action in deep networks.

Mei    00:52:47    Yeah. On top of that, there have also been a lot of technological advances in both fields, and we have been observing different things thanks to them, which means that when you're trying to model a certain aspect of a certain neuromodulator, we always update our model based on these new studies, right? For example, noradrenaline was traditionally thought to have a really homogeneous distribution as well as function, but recently, thanks to optogenetics, we have observed really context-dependent and also projection-dependent functions of those neuromodulators. So we definitely want to update our models based on what the experiments suggest, which means we will never be there, but we will get better and better. Yeah.

Paul    00:53:31    Hm. Well, Mei, I wanted to say, because you mentioned networks of like a hundred neurons: one of the reasons why I wanted to have you guys on was because on the previous episode I had Eve Marder on, and she's famous for working on this approximately 30-neuron system, the stomatogastric nervous system in the lobster and crab. And one of the things she's famous for is being struck, and worried, by the resilience of these networks under different sets of parameters. So, in terms of neuromodulators, and she doesn't focus only on neuromodulators, but if you had a high concentration of noradrenaline or something, you might still get nice rhythmic behavior from the system; it could still produce nice oscillations, right? It's an oscillatory system. And then the crab moves south a hundred yards or something, and there's a lower concentration of potassium or something in the seawater,

Paul    00:54:35    and still you get these rhythmic oscillations, which goes to show how resilient even small networks can be. But the worrying part is how to understand what those networks are doing, because there are so many different parameters at work and it's hard to crash the system. So then, thinking about scaling up to such a massive neural network level actually seems even more daunting. Because I know you're familiar with Eve Marder's work: how do you think about that smaller scale, and how much we still don't know about it, versus scaling up to something much larger?

Sri    00:55:14    So, indeed. One statement that Eve made many years ago at a Gordon Research Conference has always stuck with me, and was also a factor in motivating me to look at neuromodulator function. Eve said that the connectome is absolutely necessary, but completely insufficient, to understand the brain. What she probably meant there was: yes, the anatomical wiring diagram is the blueprint, carved in stone, or not entirely, and it provides a set of instructions to the network. But what neuromodulators do is act on top of this set of instructions to confer a lot more flexibility, to expand the set of instructions into a set of options. And in some sense, looking at the architecture of deep neural networks, their layered configuration and so on, yes, they are structured in a way to work with a set of options, but how exactly they choose among these options is something we don't know.

Sri    00:56:37    And again, coming back to your question on complexity: sure, we still don't know how the output of this small network of 30 neurons can be the same regardless of a lot of stuff that's going on deep down under, how local components are being constantly reconfigured to end up with the same kind of result. This is also something that I've looked into as part of my own stint in the Blue Brain Project, and this combinatorial explosion is indeed a problem. But what we still don't know is how to relate neuromodulator function and map it to a certain brain state, because, as I mentioned before, I think it's a synergistic role-play, in that getting to a certain brain state may demand a certain configuration of neuromodulators to act, and the brain can only get to a certain state if this combination of neuromodulators is activated.

Sri    00:58:06    So I think if we could try and understand which specific neuromodulators actually function to get to a certain brain state, and how they act on certain cells and synaptic connections, then I think this problem could be a little more tractable. And to add to this, the whole new field of transcriptomics is actually a very good ally in helping us understand how this could work, because it could be that certain combinations of neuromodulatory receptors are expressed in some cell types. By looking at what these receptor combinations are and linking them to different neuromodulators, I think we could then try and predict how combinations of neuromodulators could actually act, through the expression of certain receptors, to help the brain get to a certain network state.

Sri    00:59:19    And therefore we don't have to study all possible neuromodulators; we can just try to understand the expression of certain receptors, and how it correlates with the expression of receptor types of other neuromodulators. If these are all expressed at high levels, then it could be that the neuromodulators acting on these different receptors are having a similar function. So I think these are some of the ways that we could harness to really break this problem down into something a bit more tractable.
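[Editor's note: Sri's receptor-co-expression idea amounts to a simple analysis one could sketch in a few lines. The receptor genes named below are real muscarinic, adrenergic, dopaminergic, and serotonergic receptor genes, but the expression table, cell types, and numbers are entirely made up for illustration.]

```python
import numpy as np

# Rows are hypothetical cell types, columns are neuromodulatory receptors.
# Receptor pairs whose expression correlates strongly across cell types are
# candidates for neuromodulators acting in concert on the same populations.
receptors = ["chrm1", "adra1a", "drd1", "htr2a"]
expression = np.array([
    [5.0, 4.8, 0.2, 3.9],   # cell type A
    [4.1, 4.0, 0.1, 3.2],   # cell type B
    [0.3, 0.5, 6.2, 0.4],   # cell type C
    [2.2, 2.0, 3.1, 1.8],   # cell type D
])

corr = np.corrcoef(expression.T)          # receptor-by-receptor correlation
pairs = [
    (receptors[i], receptors[j], corr[i, j])
    for i in range(len(receptors))
    for j in range(i + 1, len(receptors))
]
pairs.sort(key=lambda p: -p[2])           # strongest co-expression first
```

On real single-cell transcriptomic data the same computation, with appropriate normalization, would produce the kind of candidate receptor combinations Sri describes.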

Mei    01:00:01    Yeah. So apart from the biological constraints, when we are working with DNNs, I think it's quite important to understand what role each of the various layers is playing. And one way is also to work with the deep learning community to have better visualization tools and such from their side; I think this is something the community is currently working on, so-called explainable AI, trying to break down the black box. For example, right now I'm working with biologically plausible firing in deep neural networks, and from this firing we can have a more direct way to observe what's going on in the neuronal connectivity and within each neuron. So we simply visualize the firing pattern of the cells in the fully connected layer.

Mei    01:00:51    And we try to see, if we play with the adaptive factor or the modulatory factor within these layers, how it's going to affect the firing patterns of the cells. And we have found something quite interesting: the firing is actually factor-dependent. So if we play with a hyperparameter a bit, we can change the firing of the cells, and also the distribution of subpopulations of cells, judging by their firing pattern. So I think this is a really interesting observation, and we should really push for better visualization tools, or computational tools, to understand what's going on within each layer and observe these dynamics. And one more thing is that we can also use a shallower network at the beginning, and we can try to play with different architectures

Mei    01:01:39    and see how that interacts with the neuromodulatory systems. An interesting study done by <inaudible> and colleagues looked at the robustness of neural networks with and without neuromodulation, and they actually found that a network is more robust against different architectures when it has neuromodulation. I think this is something quite interesting. They may observe that robustness due to different factors; for example, the internal workings might be different but still reach the same result. But I think this is a direction we can step into: how neuromodulation affects the general output, and also the inner workings, in addition to the output of the neural network as measured by accuracy or something. And when you're trying to probe this question, as we also proposed in our paper, in addition to the traditional global loss, especially in feedback-based learning or continual learning, you want to have something in between; you want more direct feedback from the task, like smaller, task-related learning signals or responses.

Mei    01:02:46    And this way we can capture it on a different spatiotemporal scale. This is more related to the temporal scale: we can have more frequent sampling points along the learning trajectory.
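[Editor's note: the proposal to supplement a single global loss with smaller, task-related feedback at intermediate layers could be sketched like this. The particular local signal, a target-sparsity penalty, and the weighting `lam` are hypothetical choices just to make the idea concrete.]

```python
import numpy as np

def global_loss(pred, target):
    # Standard task-level objective (mean squared error here)
    return float(np.mean((pred - target) ** 2))

def local_loss(hidden, desired_sparsity=0.2):
    # A layer-local feedback signal: keep activity near a target sparsity
    return float((np.mean(hidden > 0) - desired_sparsity) ** 2)

def total_loss(pred, target, hiddens, lam=0.1):
    # Global loss plus smaller, intermediate, "task-related" signals,
    # which can also be evaluated more frequently along the trajectory
    return global_loss(pred, target) + lam * sum(local_loss(h) for h in hiddens)
```

Because the local terms are computed per layer and per step, they give exactly the finer-grained temporal sampling of the learning trajectory that Mei describes.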

Paul    01:02:57    Going back to something that Sri said: well, so, you know Eve Marder, and she actually used the same quote, that structure is necessary but not sufficient to explain function. I don't remember if we actually spoke about this on the episode with her, but one of the things that they found, in simulating, I'm going to make this up, 5 million different simulations with different sets of parameters, was that the space out of 5 million that was viable was pretty low as a percentage, but it was still in the hundreds of thousands of parameter sets that would work. So yeah, it's constraining, but it's still quite daunting. So guys, I'm aware that our time is pretty close here. I have a guest question from Mac Shine, and in the question, Sri, he's talking to you, but the question is for both of you, so I'm going to play it through the phone here.

Speaker 7    01:03:59    Hi, Sri, this is Mac. So we're both really interested in how different arms of the ascending arousal system release neuromodulatory chemicals that change the firing rate and plasticity of areas of the brain, like the cerebral cortex and the thalamus, but I would say we've taken quite different approaches. I've been interested in trying to understand the basic principles underlying the interactions between the neuromodulatory system and the circuitry of the brain, whereas you've really dived down into the details and embraced the complexity of an area like the cerebral cortex and all of the different types of cells that exist there. And I wonder if you have any ideas about how to join the dots here: how to extract basic principles from the brain, but also really understand the exquisite detail that's present.

Paul    01:04:43    Okay. So, you know, we've already touched on this topic throughout the conversation, but there's the question.

Sri    01:04:51    Right. So I think one way to do this is really to harness this whole approach of optogenetic recordings in the intact brain. Now, we really have these very specific transgenic mouse lines for the big five neuromodulators. There's also a technique called compound breeding; I'm just getting into the nitty-gritty of these experimental details, but with these compound breeding techniques, and I'm sure it's very complicated, but in principle, assuming that it's seamlessly possible, one could in fact genetically label a certain neuromodulatory projection, let's say histamine, and then trace these histaminergic projections to specific neuron types, let's say in the cerebral cortex.

Sri    01:06:00    That way, it would enable trying to quantify, for example, what's going on in a population of layer 5 pyramidal cells in the neocortex upon activating histamine, right? So you're somehow coming at it from a top-down approach, in that you're trying to understand what's happening to a population of neurons under a certain behavioral paradigm and how a certain neuromodulator is actually regulating this behavioral paradigm. That way we're actually bringing together these two approaches: what I'm a fan of, a partly bottom-up approach, and what I know of Mac's own work, in that he's trying to build mean-field, or rather population-level, models of network activity. So I think that way we could really try and merge these two complementary perspectives to understand what a certain neuromodulator is doing to a specific population of neurons during a behavioral task.

Mei    01:07:27    Yeah, I mean, regarding the biological details, there are many studies that are using novel approaches just like optogenetics. And thinking about the principles, recently a really important finding was reported in Science on dopaminergic modulation of cell morphology, and it really goes down to the finest level, that of dendritic spines. There's a group of Japanese researchers who study the striatum, the medium spiny neurons, and use a really refined timescale as well. They found that dopaminergic regulation can enlarge the spines of certain neurons within a short time window of 0.3 to two seconds. I think this is one prime example of using novel technologies to really go into the biological detail at the smallest possible spatial scale we can have.

Mei    01:08:18    And also a lot of projection-specific studies, on the role of, for example, cholinergic cells, are relying on the same techniques, or sometimes two-photon uncaging as well. So two-photon microscopy can come in combination with optogenetics, which was also done by this group of Japanese researchers, but also in other studies on the projection-specific functions of, for example, noradrenergic systems, such as the studies in Nature Communications, I think, from Joshua Johansen's lab at RIKEN. They were trying to unlock the function of the <inaudible> system, and there are different projections for different functions, so for sensory perception as well as for fear conditioning.

Mei    01:09:05    So, you know, this way we can really more comprehensively understand the diverging functions of the same system, within the same nucleus, across different layers, depending on their projections, and also disentangle, what can I say, the interactions across cells within the same microcircuits. For example, within the basal ganglia: the D1 and D2 cells, as well as the cholinergic cells that are regulating their behavior. So I think it's definitely contributing to a more comprehensive understanding of what convergence and divergence are going on at the local and the more global scale.

Paul    01:09:40    So we've been talking about neuromodulators largely here, but Mei, you mentioned your interest in Blake Richards' work incorporating multi-compartment models of units instead of just point units. And you guys talk about that in the paper, but you don't go into as much detail as you do with the neuromodulator story. What I was going to ask is how challenging it will be. My first thought was that it's a lot more straightforward to incorporate dendrites, let's say, because if you model them all the same anyway, then it's almost like just modeling another computation, right? Like another activation function, almost, just at a higher scale. But then you mentioned that if you're going to model, say, the size of the dendrites changing, I guess that's in some sense like modeling a weight change to a dendrite differently over the course of all the dendrites. But then neuromodulation, with all the interactions, seems to me at first pass to be a much more complicated thing to model, to figure out. So I'm wondering if you guys agree with that: building in the dendrites, for example, or multi-compartment models, do you feel like that's a much more straightforward process than modeling in neuromodulation?

Sri    01:11:08    Well, sure, that's one way to do it. But there have been efforts to try and systematically collapse the biological detail into simpler models. There may be ways of retaining this biological complexity through some kind of activation functions, right, that could represent nonlinearities in dendrites, or could also represent synaptic conductances on dendrites, and then account for the fact that these conductances, when they're activated at the synaptic location on a dendrite, are severely attenuated by the time they get to the soma, for example in <inaudible> and things like that. Right. So there are ways to do this. So I think a more efficient and smarter way to see how to incorporate neuromodulation into such a framework would be to ride this bandwagon: how can we capitalize on this whole infrastructure that's being developed to collapse biological detail into something simpler and more tractable, and then use this framework to also incorporate neuromodulatory receptors?

Sri    01:12:49    I mean, ultimately what this means is this could just be another set of activation functions, because the way these neuromodulatory receptors work is also pretty nonlinear. So this could be like another set of complementary, parallel activation functions that could go into such a modeling framework. So this is one way to do it. And there are groups, for example <inaudible>'s group or Idan Segev's group, that are actually trying to look at ways to simplify all the complexity out there, and yet somehow retain this flavor of complexity. So what this enables us to do is to model and simulate networks that are not as expensive in terms of computational infrastructure, and yet, and the beauty here is that they're retaining a lot of the biological complexity.
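As a rough illustration of what Sri describes, a point unit can be extended with a dendritic nonlinearity, somatic attenuation, and a parallel receptor-like activation that multiplicatively modulates the output. This is only a sketch; the function names, constants, and the particular sigmoid/tanh choices are hypothetical, not anything proposed in the paper:

```python
import math

def dendritic_nonlinearity(x):
    # Sigmoidal nonlinearity standing in for NMDA-like dendritic integration
    # (hypothetical threshold of 1.0 and slope of 4.0).
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))

def neuromodulatory_gate(mod_level, sensitivity=2.0):
    # A parallel activation standing in for nonlinear metabotropic
    # receptor effects of a neuromodulator.
    return math.tanh(sensitivity * mod_level)

def unit_output(synaptic_input, neuromodulator_level, attenuation=0.6):
    # Dendritic branch output, attenuated en route to the soma, then scaled
    # by a multiplicative neuromodulatory gain term.
    dendrite = attenuation * dendritic_nonlinearity(synaptic_input)
    gain = 1.0 + neuromodulatory_gate(neuromodulator_level)
    return gain * dendrite
```

The point is only that both the dendritic conductances and the receptor effects collapse into cheap, composable functions, rather than requiring full multi-compartment simulation.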

Mei    01:13:50    Yeah. So to me, what's interesting, in addition to the points Sri made, is that there are already established approaches in computational neuroscience allowing us to do that. For example, how they actually model neurotransmitters and neuromodulators through ion channel kinetics, you know, coded in the NEURON simulator's mod files. So they really investigate the biological literature on how the neuromodulator affects the local conductances and also the behavior of the cell, and they abridge this into mechanisms in the mod file. So I think we can do something similar, and, as Sri has said, we can pretty much model a cell with a changing excitability using, for example, alternating activation functions, right?

Mei    01:14:42    Because what it takes to trigger a cell to spike at a certain time point can differ from another time point, depending on the state the cell is in. So we can model this kind of excitability change or conductance change as something that's more commonly used in deep neural networks, that is, activation functions or hyperparameter changes. So there are always parallels between the two. But we just wanted to open this discussion, that there are really neuromodulatory effects at different levels, including at the level of individual dendrites, and the same cell can really behave differently in different contexts. I think that's also one point of the paper, and so we really want to bring that into scope.
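One minimal way to read Mei's point in code: make the activation function itself state-dependent, so a scalar "neuromodulator" signal shifts the effective firing threshold and gain of every unit at run time. Everything here (the class name, the specific threshold/gain formulas, the constants) is a hypothetical sketch, not the authors' model:

```python
import random

random.seed(0)

def modulated_relu(z, mod=0.0):
    # Hypothetical neuromodulation-aware activation: a positive modulatory
    # signal steepens the gain and lowers the effective threshold,
    # mimicking a transient increase in excitability.
    gain = 1.0 + mod              # slope of the transfer function
    threshold = 0.5 - 0.3 * mod   # effective "spiking" threshold
    return max(0.0, gain * (z - threshold))

class ModulatedLayer:
    """Dense layer whose units share one scalar neuromodulator signal,
    supplied at inference time rather than learned as a fixed weight."""

    def __init__(self, n_in, n_out):
        self.w = [[random.gauss(0.0, 0.1) for _ in range(n_in)]
                  for _ in range(n_out)]

    def forward(self, x, mod=0.0):
        pre = [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]
        return [modulated_relu(z, mod) for z in pre]
```

With fixed weights, the same input can produce different outputs under different values of `mod`, which is one way to capture the idea that identical cells behave differently in different neuromodulatory contexts.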

Paul    01:15:24    A lot of what we know about neuromodulators in particular, bringing it back to neuromodulators, is through mammals, right, mammalian systems. Do you think there's any reason to look across different phyla? I don't know anything about neuromodulation in the octopus or the shrimp or something, and it doesn't have to be a sea animal, or birds, right? Do you think there's any use in studying the different capabilities of different species and their sets of neuromodulators, that landscape? Or do we need to focus on things that, since we're so interested in ourselves, our mammalian selves, make the human the goal? But, you know, there's the possibility, not of inventing new ones, but of incorporating principles across species. Right?

Mei    01:16:14    Please go ahead.  

Sri    01:16:15    Yeah, sure. So I think a lot of the neural mechanisms that underlie the evolution of behavior are pretty much remarkably conserved across species.

Paul    01:16:30    Yeah. That's what I thought, but there must be differences. There must be appreciable differences.

Sri    01:16:34    Indeed. So, I mean, as species have evolved, right, the size of brains has of course remarkably changed. There's also the aspect of more neurons, perhaps more intricate dendritic architectures, that could then invariably support more receptor types and things like that. So I think overall the primordial function of certain neuromodulators is pretty much conserved across species, but indeed, with evolution, there are also certain nuances. For example, I don't know if the octopus brain would have the same kind of receptor diversity for histamine as the mammalian brain.

Paul    01:17:31    Octopi don't sleep, do they?

Sri    01:17:33    Well, okay. They probably don't, but,

Paul    01:17:38    Uh,  

Sri    01:17:40    Yeah, I mean, they probably have some kind of a quiescent state. I don't know.

Paul    01:17:46    Because you mentioned histamine, that triggered that.

Sri    01:17:50    Oh, okay. Wait. So again, we've traditionally studied the role of histamine in the mammalian brain in relation to sleep and wakefulness. But I think histamine in itself is a distinct neuromodulatory system that's probably doing all kinds of things. I mean, there's a lot more evidence now that it's involved in regulating gamma oscillations and things like that. So all I'm trying to say is that it could be that, given that the architecture of neurons is a lot more complicated across the evolutionary tree, there simply is room to support more receptor types that could bring on board assorted functions upon the action of the same neuromodulator. But overall, I think this basic function is probably conserved across species, and the nuances are what set the function of neuromodulators apart from invertebrates to vertebrates. And this is something that we simply don't know much about.

Mei    01:19:10    I mean, I think an interesting example is that we have different purposes in different disciplines. For example, if we discuss biomimetic design in robotics, it has a different meaning than what we have in neuroscience, right? They pretty much use exoskeletons or other kinds of simple mechanical architectures to simulate how the animals are ambulating, and so on. And I think for different subjects, for different areas, we have different goals. For example, in reinforcement learning, in natural scenarios, danger avoidance, avoidance behavior, is actually one important point of discussion, right? We avoid danger to preserve ourselves in a wild environment, and this is one focus of study. So I think it's interesting to study different organisms, depending on the question you want to answer.

Mei    01:19:58    For example, the lampreys: there are a lot of studies by Sten Grillner studying the lampreys, the central pattern generator, and also neuromodulators. And Eve Marder, in her really famous studies, used the crustaceans, which have simpler networks that are easier to tag. And the crustaceans have different kinds of neuromodulators, for example proctolin and others, not the same as the big four that we have in our brains. Although a lot of primitive functions are largely preserved, for example fight or flight, and also preserving body heat and maintaining a certain body temperature, reflexes: these are already preserved. And there's a higher diversity when it comes to higher cognitive domains, such as social functions and executive functions.

Mei    01:20:45    And there's a big divergence between individuals, and we can even have subtypes of social cognition in people. In our paper, there was a figure representing the different spatiotemporal specificities of neuromodulator functions and processes; at the higher temporal and spatial scales, we have longer-extending functions of more continuous feedback and learning and processing. And I think this is something quite specific to most of the mammals, and to humans, that possess social functions and have to react to a really complex environment. I think these kinds of behavioral demands are really different from other species. And I think that's also an interesting point: having all this knowledge of all those different species is definitely interesting for us scientists, to pick different model organisms to address certain questions. Because, you know, sometimes ambulation on a really bumpy terrain by bipeds, by people, is definitely something that's less well studied, because we are not surpassing certain species on that, because we don't navigate well in that kind of terrain.

Mei    01:21:55    Right. So really  

Paul    01:21:55    Depends. It should be studied more if not just for comedy.  

Mei    01:21:58    Yeah, exactly. So it really depends on the question you want to answer. And this is also something, I think, Sri and I, earlier this year and last year, wanted to think of as a future study. We're very interested in the evolutionary perspective on neuromodulation: how it changes from something more primitive, you know, just preserving basic life functions, to something more elegant, involved in higher cognitive domains. And we wanted to understand how this is related to the evolution of the brain and the increasing folding of our neocortex. So maybe we'll have this study in the future. Yeah.

Paul    01:22:32    Tiny question, then. Sri, go ahead. Sorry.

Sri    01:22:35    So I think Cori Bargmann has actually been doing a lot of these seminal studies, really looking at the role of specific neuromodulators in an extremely primitive organism like C. elegans, right, with its 300 or so neurons. And yet, remarkably, this primitive organism displays a very rich repertoire of behaviors, possibly due to neuromodulatory function, right from olfaction onwards. And it seems that these neuromodulatory systems are conserved across phyla, from C. elegans all the way to the mammalian brain, where similar neuromodulators seem to be involved in olfaction, for example. So that's really remarkable. But the question, obviously, is how do these neuromodulatory systems scale as brains expand, with the number of neurons and their architectures, and different brain regions and their interactions? And how is this primordial function actually conserved despite this whole <inaudible> explosion? So I think this is really fascinating, really remarkable. And, as Mei said, we're trying to skim the surface of how this could be possible, to provide some perspectives.

Paul    01:24:23    That is an even larger question, I suppose. But if we're at three on a scale of zero to a hundred in terms of what we understand about neuromodulators, and three-ish on a scale of zero to a hundred in incorporating that and making it useful in deep learning networks, there's certainly a lot more work to do. So I look forward to a lot more of that work from you guys, and wish you luck. I appreciate the conversation today.

Mei    01:24:48    Thank you for the questions. Yeah, it also leads to more thinking. I think we'll have discussions, you know, internally and also with the wider public regarding our work and the things we can do in the future.

Sri    01:24:59    Yeah. Thanks a lot, Paul, for the chance to chat with you, and for the great podcast. And hopefully a lot more people will realize the urgency of the problem and try and join hands with us to crack neuromodulation a lot more efficiently in the coming years. Or maybe in the coming months, we don't know. The more hands we have on board, the better, of course.

Paul    01:25:29    There we go. You heard it, listeners: it's a call to arms to join the cause. All right, thanks guys. Appreciate it. Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side but still have science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, <inaudible>. The music you hear is by The New Year. Find <inaudible>. Thank you for your support. See you next time.

0:00 – Intro
3:10 – Background
9:19 – Bottom-up vs. top-down
14:42 – Levels of abstraction
22:46 – Biological neuromodulation
33:18 – Inventing neuromodulators
41:10 – How far along are we?
53:31 – Multiple realizability
1:09:40 – Modeling dendrites
1:15:24 – Across-species neuromodulation