Brain Inspired
BI 167 Panayiota Poirazi: AI Brains Need Dendrites

Support the show to get full episodes and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, doing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota’s opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.

0:00 – Intro
3:04 – Yiota’s background
6:40 – Artificial networks and dendrites
9:24 – Dendrites special sauce?
14:50 – Where are we in understanding dendrite function?
20:29 – Algorithms, plasticity, and brains
29:00 – Functional unit of the brain
42:43 – Engrams
51:03 – Dendrites and nonlinearity
54:51 – Spiking neural networks
56:02 – Best level of biological detail
57:52 – Dendrify
1:05:41 – Experimental work
1:10:58 – Dendrites across species and development
1:16:50 – Career reflection
1:17:57 – Evolution of Yiota’s thinking

Transcript

Yiota    00:00:03    I often wonder whether this huge variability, this complexity, is not necessarily linked to a unique function, but rather there is a very high level of redundancy in the brain that, you know, was never punished, evolutionarily speaking. All these aspects have to do with efficiency: efficiency in terms of computations and utilization of resources. And I think that’s where the dendrites really shine. Initially, I was thinking that dendrites are really, really important because they allow the brain to do more advanced, uh, let’s say, computations than we could do otherwise. But I now kind of think that their more important role…

Paul    00:00:55    This is Brain Inspired. I’m Paul. Panayiota Poirazi is my guest today. Yiota, uh, runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. And Yiota loves dendrites, those branching tree-like structures sticking out of all of your neurons. And she thinks that you should love dendrites too, whether you study biological or artificial, uh, intelligence. So in neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, uh, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, um, doing many varieties of important signal transformation before signals reach the cell body. So, for example, way back in 2003, Yiota showed that, um, because of dendrites, a single neuron can act as a two-layer artificial neural network.

Paul    00:02:01    And since then, others have shown that single neurons can act as deeper and deeper multilayer, uh, networks. On top of that, and in Yiota’s opinion, an even more important function of dendrites is the increased efficiency in computing that they bring to the table, something that evolution surely favors and something that artificial networks will need to favor as well moving forward. So we talk about some of her work, um, and her reflections and ideas about the computational properties of dendrites with respect to cognition and efficiency. As always, go to braininspired.co, uh, if you have a couple bucks to support this podcast through Patreon, or if you’re interested in learning more about neuro-AI topics, you can check out my online course, Neuro-AI: The Quest to Explain Intelligence, also at braininspired.co. Show notes for this episode are at braininspired.co/podcast/167. Okay, here’s Yiota. Yiota, you’ve been, um, studying dendrites since your master’s thesis at least. Um, so I, I was kind of assuming that maybe you came in from studying something else and eventually came to love dendrites, but, uh, ha, have you always been interested in dendrites, or how did you come to love the dendrite?

Yiota    00:03:22    Hmm, good question. Well, I was always interested in the brain, right? Since I was, um, a high school student, I wanted to study something related to the brain. But, you know, this is a long story. There was no university in my home country, in Cyprus, where they would offer a neuroscience or even a medical, uh, degree, let’s say, as an undergrad. So I studied mathematics instead, and then I wanted to find my way somehow back into biology, and in particular the brain and neuroscience. So I went to USC to study biomedical engineering, which was the way to get back through a mathematics degree. And there I met Bartlett Mel, and I met Bartlett, like, you know, the first few months that I arrived at USC, and he was doing dendrites. So I, I wasn’t in love with dendrites at that time. I just fell in love with them when I started working with Bartlett. Mm-hmm. That’s how it all started.

Paul    00:04:13    So you were at USC and you decided to leave the United States and go back to, uh, Greece. And I understand that you were like one of the, or the youngest research director at your institution. Is that true?  

Yiota    00:04:25    I was the youngest researcher to be hired when I was first hired. So I was 27, I think, uh, when I got the position.

Paul    00:04:32    There were only like three other people, right, in, in that kind of area of study?

Yiota    00:04:37    <laugh> In the entire country, you mean?  

Paul    00:04:39    Oh. Oh, okay. That’s quite a bit larger, then. <laugh>.

Yiota    00:04:42    Okay. No, my institute is pretty big, but, uh, computational neuroscience is not big in Greece, unfortunately. Uh, there are less than five labs throughout the country that do computational neuroscience. So it’s a kinda, uh, you know, unique skill around this place.  

Paul    00:04:57    Well, what about experimental neuroscience, because I know that you, you’ve transitioned your lab, and I’m gonna ask you about that later mm-hmm. <affirmative>, um, to include experimentation. Uh, is, is that a big scene in Greece?  

Yiota    00:05:06    Well, there are quite a lot of people doing molecular neurobiology, but not really systems neuroscience. Um, there are a few people that, uh, used to work on monkeys, uh, in Crete. Well, some of them still do, but there is not a lot of, um, electrophysiology or in vivo behavior. So it’s still kind of lonely here. There is at least one other lab that has recently joined, and it’s, um, a person who came from the US and has the expertise for systems neuroscience in mice, and that’s why I also decided to expand. So we’re sharing a lab, and, uh, he’s helping a lot with, uh, you know, training, uh, the people in my lab to start doing experiments.

Paul    00:05:47    Okay. Um, you said it’s lonely. I was thinking that there, there are pros and cons, right? Because a pro is that you sort of stand out and you don’t have much, um, fellow competition in your area. But a con is that, uh, it’s a lonely endeavor.  

Yiota    00:06:00    Yes. I mean, I think it’s really important to have colleagues with whom you can discuss problems, troubleshoot, you know, write joint grants, write papers together, mostly run your ideas by them, right? And I really felt this difference when I was doing my sabbatical at UCLA in 2008. Then it really stood out how different it is to be in Greece and to be, you know, kind of a loner on this. Um, and, um, I appreciated a lot the collegiality that I found there and the opportunity to talk to people on a daily basis.

Paul    00:06:33    Okay. So it’s like passing people in the halls and actually physically attending talks and having conversations after and before. Yeah. Um, alright, so I’m gonna kind of skip ahead here, um, because a lot of what you do and have done in the past is computational modeling of dendrites, and you argue essentially that, uh, artificial networks need dendrites. Um, and yet, you know, there’s been this recent explosion in the capabilities of artificial networks. And I’m kind of curious how you have viewed that explosion, watching artificial networks getting better and better and better. Have, have you just been thinking, ah, if only they had dendrites the whole time, or are you thinking, oh, maybe they don’t need dendrites? What’s, what’s that been like? Yeah,

Yiota    00:07:19    So that’s a very important question, because I think that brains need dendrites. And there’s a big distinction between a brain and an artificial system. And the reason why there is such a big distinction is that the brain is under evolutionary pressure. We have a limited size of the skull. We have, you know, a neural tissue doing the computations. We run on, uh, very low energy. All these aspects have to do with efficiency, efficiency in terms of computations and utilization of resources. And I think that’s where the dendrites really shine, because they bring this efficiency to biological brains in many ways. We can discuss that. Now, you can think of GPT, for example, right? You know, that’s, it’s a really big thing right now. It needs the energy of a small city to be trained. It’s not comparable to what my brain or what your brain is doing, not to mention the size of the infrastructure, the number of GPUs that are needed to run this thing, right?

Yiota    00:08:15    And it’s still not smart, smarter than, let’s say, a seven-year-old child. So I still think dendrites are really important. They’re important for two reasons. One, which we haven’t really, um, mastered yet in artificial, uh, intelligence systems, is like the ability of the brain to learn continuously, you know, lifelong learning, for example, or to transfer skills from one task to another. There’s great improvement in the new systems out there, but this is still an open task, I think. And the most important aspect, obviously, is efficiency, sustainability. I mean, we are reaching a ceiling in the technology of, uh, chip manufacturing, and this is not enough to answer the very high demand for, uh, computational intelligent systems that we need in our daily lives. So efficiency is gonna become a bottleneck in the next few years. And there are already a lot of people out there talking about neuromorphic computing, with the idea of doing what the brain is doing so that we can achieve the same or better, let’s say, computations and intelligence with a much more sustainable technology: smaller, cheaper, you know, low cost, low energy. And I think that’s where dendrites really matter.

Paul    00:09:25    So I appreciate the efficiency story and the importance of efficiency, but there’s a part of me that wants, that, that is more excited about the, um, expansion of cognitive abilities, the improvement in cognition from dendrites. Because we’re living in a world today, right, where yes, it costs a small city to train these models, but you can just go to a medium-sized city and it gets smarter, and a bigger city and it gets smarter. Maybe eventually we’ll get to the Earth-sized, uh, uh, models and it’ll just be like super smart. Um, so do you think that the, I mean, do you think that scaling up will just accomplish the same cognitive, um, capabilities, or is there some secret sauce in dendrites?

Yiota    00:10:11    That’s a wonderful question. If you had asked, uh, Matthew Larkum, I’m sure he would say that there is a secret sauce there. Uh, I am not entirely convinced, to be honest, because, because, uh, deep neural networks, which is essentially, you know, the technology behind these systems, right? So if they scale big enough, they should be able to approach any, uh, problem, any kind of computation. At least that’s what we know from theoretical math, that they’re general approximators; they should be able to solve any problem, right? Mm-hmm. <affirmative>. So it could be that we haven’t really reached the size of the network that is required to come up with, uh, cognition. We don’t know, right? That’s, it is also possible that there is something special about dendrites which, however, we have not really put our finger on yet. I mean, there are really cool things that happen in dendrites.

Yiota    00:11:04    And one of the really interesting things that, uh, our work with Matthew, for example, is looking at is related to perception. And Matthew, these are not my studies, so I’m not, uh, taking any credit for this. So Matthew has shown that when the animal is very close to perceiving a new sensory stimulus, that is when the dendrites light up, and you have this massive activity, measured with calcium imaging, in the tufts of the distal dendrites of cortical neurons when the animal perceives, uh, a stimulus. So it is possible that without dendrites, you cannot have this aha moment, right? Which is what happens when you realize that you’re now perceiving something. So that’s, that’s the other possibility that answers your question with respect to the magic sauce: can we not do that without dendrites? Is it really that we need dendrites to actually have this kind of computation?

Yiota    00:11:57    I don’t think so, because in terms of a computation, what happens is, like, you know, now the dendrite is signaling to the cell body that something really important is coming in. So, you know, kind of pay attention to it, and it’s done within the single neuron by the dendrite talking to the cell body, which otherwise it would not have done, right? If it’s not an important signal, you would not see these calcium spikes. You could definitely do that with a pair of neurons, where you have them disconnected and then they talk to one another under specific conditions. Now, the brain might wanna do this with dendrites, again, for reasons of efficiency. You, you save much more space by not having, you know, these cell bodies, which are huge in the medium, and having them talk to each other conditionally. You also have a lot of dendrites that cover a very small surface area, so that you can have a lot of combinations of when you want this dendrite to talk to the cell body.

Yiota    00:12:51    And therefore, presumably, you can think that you can, uh, let’s say, learn many different things by combining the different dendrites within the same neuron, right? Whereas if you did not have dendrites, you would have to have as many neurons to implement this computation. So I’m not sure whether it is a matter of a special sauce or just a matter of, you know, this being a much more efficient, smarter, uh, way to do this, to solve the same problem. We don’t know for sure. I mean, I wouldn’t bet all my money <laugh> on the fact that there is no special sauce in dendrites. In fact, I wish there is, right? I’ve dedicated my life to working on dendrites. We just haven’t seen it yet.

Paul    00:13:29    I, I, I was thinking, I, I just had the thought that, um, a line in math, right, is an infinite set of points mm-hmm. <affirmative> all jammed together. And I mean, not only would you consider, you know, replacing a dendrite in an artificial system with another unit, but you might have a hundred units all connected together, because dendrites are so complex. They have these propagating properties, and their shape, they have different shapes, they have different ionic conductances, and, um, all, all sorts of things are happening and in constant flux. Um, and so is that a crazy idea, to just chain up a bunch of units together with different properties?

Yiota    00:14:06    It’s not a crazy idea. And people have already proposed this, uh, this chain of, uh, units that represent, let’s say, the three compartments, not the whole dendrite, each of which is doing, you know, its own computation. And this is supported by experimental data. In fact, we have proposed that a cluster of synapses might be the smallest computing unit, because you only need something like, you know, five to eight spines activated together to generate, uh, a dendritic spike within a compartment. And if you consider this computation a nonlinear computation, which essentially is what it is, right? You’re crossing a threshold, you’re mimicking the sigmoid of the cell body. Then you can think of having many such clusters of synapses within a neuron, each of which would be a computing unit.
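
A minimal sketch of the two-layer picture described here, with invented thresholds and gains: each dendritic branch applies its own sigmoid (standing in for the dendritic-spike threshold) to the sum of its clustered synaptic inputs, and the soma then applies a second sigmoid to the summed branch outputs. The same six inputs drive the cell much harder when clustered on one branch than when scattered.

```python
import numpy as np

def sigmoid(x, threshold, gain=5.0):
    """Threshold nonlinearity standing in for a dendritic or somatic spike mechanism."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def two_layer_neuron(branch_inputs, branch_threshold=5.0, soma_threshold=0.5):
    """Layer 1: each branch applies its own sigmoid to the sum of its (clustered) synaptic inputs.
    Layer 2: the soma applies a second sigmoid to the sum of the branch outputs."""
    branch_outputs = [sigmoid(np.sum(inputs), branch_threshold) for inputs in branch_inputs]
    return sigmoid(np.sum(branch_outputs), soma_threshold)

# Six co-active unit inputs clustered on one branch versus the same six scattered over three branches.
clustered = [np.ones(6), np.zeros(6), np.zeros(6)]
scattered = [np.array([1.0, 1.0, 0, 0, 0, 0])] * 3
print(round(two_layer_neuron(clustered), 2), round(two_layer_neuron(scattered), 2))
```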

Paul    00:14:52    All right. I’m gonna go real broad here. Where are we? I mean, you mentioned earlier already that we don’t know the extent, we don’t know everything about dendrites. Um, where are we in that progress? I mean, you know, we have Hodgkin-Huxley models for neurons, right? And conductances and, um, membrane capacitances, et cetera. Um, but in terms of, so I guess this is like two, two wrapped up, uh, questions. One is just where are we in the broad scope of understanding the properties of dendrites and how they contribute to neuronal function and, and cognition? Um, and now I’m forgetting the second, uh, question. We’ll stick with the first.

Yiota    00:15:31    Where are we? Okay. So in terms of tools, uh, for modeling dendrites, we use the same tools as, uh, as, uh, we used to model the cell body. So Hodgkin-Huxley equations are perfectly fine for, uh, describing the different ionic conductances that are found in the, in the dendrites; capacitance also describes their, uh, membrane properties. But in terms of experimental data, we don’t know as much. And unfortunately, people are now kind of abandoning the methods that would tell us more about dendrites, which would be, uh, patch clamping in vitro. So we’re kind of moving away from that technology because it’s not as sexy, and people are doing a lot of in vivo experiments now. Yeah. Yeah. And in vivo, you cannot really map out the type of conductances that you have in, you know, different parts of the dendrites, and you cannot measure their, um, concentration, their conductances.
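
For reference, the Hodgkin-Huxley formalism mentioned here writes the membrane potential of a compartment, somatic or dendritic, as a current-balance equation. A sketch with textbook squid-axon constants rather than dendrite-specific ones:

```python
def hh_membrane_derivative(v, m, h, n, i_ext,
                           c_m=1.0, g_na=120.0, g_k=36.0, g_leak=0.3,
                           e_na=50.0, e_k=-77.0, e_leak=-54.4):
    """Current balance for one compartment:
    C_m dV/dt = I_ext - g_Na*m^3*h*(V-E_Na) - g_K*n^4*(V-E_K) - g_L*(V-E_L).
    m, h, n are the usual voltage-dependent gating variables (their own ODEs omitted here)."""
    i_na = g_na * m ** 3 * h * (v - e_na)
    i_k = g_k * n ** 4 * (v - e_k)
    i_leak = g_leak * (v - e_leak)
    return (i_ext - i_na - i_k - i_leak) / c_m
```

Dendritic compartmental models keep the same current-balance form, just with additional conductances (calcium, A-type potassium, NMDA, and so on) and axial currents coupling neighboring compartments.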

Yiota    00:16:24    So we know mostly about dendrites from studies that were done in previous years, and that’s, that’s a shame. And we know very little about interneurons. We know more about pyramidal neurons, but we don’t know a lot about the interneurons, and interneurons come in such a large variety, you know, of shapes and, um, electrophysiology profiles. So when we want to study interneurons, that problem, the problem of the dendrites of interneurons, becomes even bigger. So I am hoping someone who will listen to this podcast <laugh> will want to go back and study dendrites using techniques that can tell us about their electrophysiological profiles. <laugh>.

Paul    00:17:01    Oh, that’s gonna be a hard sell.  

Yiota    00:17:03    Yeah, I know. <laugh>, maybe we will do it.  

Paul    00:17:06    Yeah, you have to do it, right? Um, and, and you are kind of starting to do it. I don’t know. Are you patch clamping?

Yiota    00:17:12    Um, we do have a patch clamp facility here at the University of Crete that, in fact, uh, my lab helped establish many years ago. But in the new direction of the lab that we are starting now, we are also turning to in vivo experiments.

Paul    00:17:24    Okay. Yeah. All right. So again, we’ll, we’ll come back to that, but, um, it’s, uh, overwhelming. So, okay, the neuron, right, which used to be considered the functional unit of the brain, um, there’re, we have lots of neurons and they’re all connected and, um, recurrently connected and in complex ways. And you can, you know, kind of think about the small circuit level, in like brain areas, and that, that’s kind of more coarse-graining things. And, and you think, well, okay, I’m getting a little bit closer to, like, cognition, whatever that means mm-hmm. <affirmative>. Um, but it’s already overwhelming to sort of think about the number of neurons and the way that they’re connected, then to think about not just the number of neurons, but, you know, there are like, what, I don’t know how many different types of neurons there are these days. What’s the classification? Hundreds

Yiota    00:18:10    Probably. I, I remember something like 30 interneuron types at least. And these are the main classes. Yeah. And you have all the subtypes; I dunno how many different neurons we have by now. Yes. I would say

Paul    00:18:21    Then how many different types of dendrites are there?

Yiota    00:18:26    How do you define a type, versus—

Paul    00:18:28    Well, that’s, I know, that’s what I’m asking, right? So the complexity is, is just, um, off the charts, right? Because Yeah. I’m, I wanted to ask you about what an active dendrite is versus a passive one. Maybe you can answer that real quick before

Yiota    00:18:39    I can. An active dendrite is one that is capable of generating, uh, dendritic spikes. A passive one is one that only integrates the signals without ever crossing a threshold for a dendritic spike, and therefore the integration is, you know, linear to sublinear, whereas in the other type it is supralinear in some cases. And then it can get more complicated than that <laugh>.
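
A toy way to see the active/passive distinction, with made-up constants: a passive branch sums inputs at or below the linear expectation (saturating), while an active branch is roughly linear below the dendritic-spike threshold and jumps supralinearly above it.

```python
import numpy as np

def passive_branch(n_inputs, unit_epsp=1.0, saturation=6.0):
    """Sublinear (saturating) summation: the response never exceeds the linear sum."""
    linear_sum = n_inputs * unit_epsp
    return saturation * (1.0 - np.exp(-linear_sum / saturation))

def active_branch(n_inputs, unit_epsp=1.0, spike_threshold=5.0, spike_amplitude=10.0):
    """Supralinear summation: roughly linear below threshold, then a regenerative dendritic spike."""
    linear_sum = n_inputs * unit_epsp
    return linear_sum if linear_sum < spike_threshold else linear_sum + spike_amplitude

for n in (2, 4, 6, 8):
    print(n, round(passive_branch(n), 2), active_branch(n))
```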

Paul    00:19:02    Yeah. Yeah. You’re almost wincing right now. I mean, and then you have, you know, the, just the different, um, distance from the cell body, the different shapes, the different connectivities between the neurons where they, uh, between the dendrites, where they join and where they branch, et cetera. Um, you mentioned synaptic clustering earlier mm-hmm. <affirmative>, uh, and we’re gonna talk about plasticity because then you have a hundred different plasticity rules. Yeah. It, it just seems, d d do you ever just sit back and think this is too daunting?  

Yiota    00:19:29    Yes. And I often wonder whether it doesn’t really matter that much, in the sense that, in the light of evolution, everything that worked would get selected. And maybe we are, uh, placing too much attention on things that are different, but they’re different simply because they all serve the same function. So I often wonder whether this huge variability, this complexity, is not necessarily linked to a unique function, but rather there is a very high level of redundancy in the brain that, you know, was never punished, evolutionarily speaking, because it did not have a, a, a negative impact on, on the brain function. And if that’s, uh, you know, even remotely true, it would explain why we have such a huge variability in the brain and why we don’t really need to explain every piece of it, you know, because maybe it’s doing the same thing, hopefully.

Paul    00:20:28    But you, you mentioned earlier, you think algorithmically and algorithms are very clean and well,  

Yiota    00:20:34    Not evolutionary algorithms.  

Paul    00:20:36    Oh, well, explain that to me then, because I think of mathematical algorithms as clean, and it’s almost like modern computational neuroscience sees like an algorithm, let’s say the backpropagation algorithm, right? Just to throw one out there, sees that as like, somehow evolution is driving toward that. But it seems much messier to me than that, so evolution doesn’t work that way. It’s not normative.

Yiota    00:20:58    No, I, I don’t agree that evolution is driving towards that. Evolution is definitely selecting the optimal solution, but it’s not actively searching for it, which is what the backprop algorithm is doing. Hmm. So an evolutionary algorithm is essentially randomly making changes, like mutations in genes. That’s how they work, right? They randomly happen, and then those that lead to some advantage, they improve the fitness of the organism, they get selected, and the others slowly die off. So we get better by chance. We don’t get better by, you know, instruction, like with backprop, which essentially finds out how far away you are from your optimum target; you correct yourself in the right way so that you get there faster. That’s another kind of optimization that I don’t think, personally, is what the brain is doing, and it’s definitely not what evolution has been doing for, you know, the previous eons.
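
A minimal sketch of the contrast drawn here, under toy assumptions: an evolutionary search makes undirected random changes and simply keeps whichever variant scores better; no gradient ever tells it which way to move.

```python
import random

def evolve(fitness, genome, n_generations=2000, mutation_scale=0.1):
    """Random mutation plus selection: propose an undirected change, keep it only if fitness improves."""
    best, best_fit = list(genome), fitness(genome)
    for _ in range(n_generations):
        candidate = [g + random.gauss(0.0, mutation_scale) for g in best]
        candidate_fit = fitness(candidate)
        if candidate_fit > best_fit:          # selection: advantageous variants survive
            best, best_fit = candidate, candidate_fit
    return best, best_fit

# Toy fitness peak at (1, -2): the search gets there by chance plus selection, not by instruction.
target = [1.0, -2.0]
fitness = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
print(evolve(fitness, [0.0, 0.0]))
```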

Paul    00:22:00    So, forgive me, I, I’m gonna probably ask in a naive way. I, I, I guess what I’m sort of driving at is how, how should I think of algorithms, for instance, and mathematics, like an equation, right? That we want to map onto how the neurons and dendrites and circuits are functioning. Do I think of it as like an attractor sink, that it’s just kind of, uh, you know, I guess via gradient, um, circling around, you know, like some sort of stable or, or unstable attractors? Like how, how should I think about the mathematical tools that we use to map onto, uh, our messy biological brain?

Yiota    00:22:39    First of all, there are mathematical algorithms for implementing, um, you know, evolutionary algorithms, which is what I described: random changes, and then you just keep the, the, the good solutions. And the way you imagine those, at least in my mind, is not like an attractor that continuously, you know, uh, travels towards the, the minimum point, which would be the center, but it’s like having multiple wells on a surface, and you try to move to the optimal one. But meanwhile, you fall into many of those local minima, and then there is something that pushes you out of this well, you know, the energy goes, uh, up, and, and then you jump out of it and you fall into another one. And then slowly you converge, if you ever converge, to the, to the right, uh, minimum. Um, there is an algorithm called, uh, simulated annealing, for example, which is doing, uh, precisely this.
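
And a compact sketch of the simulated annealing idea, on a made-up one-dimensional energy landscape with several wells: worse moves are sometimes accepted, with a probability that shrinks as the temperature cools, which is what lets the search escape local minima.

```python
import math, random

def energy(x):
    """Toy one-dimensional landscape with several wells."""
    return 0.1 * (x - 2.0) ** 2 + math.sin(3.0 * x)

def simulated_annealing(x=0.0, temperature=2.0, cooling=0.999, n_steps=20000):
    for _ in range(n_steps):
        candidate = x + random.gauss(0.0, 0.5)
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with probability exp(-delta / T),
        # which is what lets the search climb out of local minima early on.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        temperature *= cooling   # slow "cooling": late in the run only downhill moves survive
    return x, energy(x)

print(simulated_annealing())
```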

Yiota    00:23:30    Um, now, if we wanna go back to neurons, the way that we learn is dictated by the different plasticity rules, as you mentioned initially. Now, these plasticity rules, there are so many, and, and, and they’re very different, and they don’t necessarily simulate something like this, right? Maybe they simulate a part of an algorithm, uh, but when you look at it from a higher perspective, it will look like an evolutionary change. And maybe we have different plasticity rules because we want to attend to different aspects of learning. For example, a, a, a young child, for example, doesn’t really know much about the world, right? So the best thing to do is to attend to regularities, things that happen together, things that happen coincidentally. So how do you detect a regularity? Well, everything that fires together wires together, like Hebb said <laugh>. So you have a local Hebbian rule, which is just looking for these things that consistently occur together.

Yiota    00:24:36    That would be one hypothesis for why you have a Hebbian rule, right? Mm-hmm. <affirmative>. Mm. Let’s see. And then this, the, the Hebbian rule is implemented in many different ways. You can have spike-timing-dependent plasticity, which is very sensitive to the timing of these events happening together. We have the BCM rule, which is dependent on changes in the calcium. This is a slightly different timescale, but again, it’s kind of a correlation-based algorithm. And you have, you know, various of these kinds of rules where neuromodulators come into play. Uh, but they all fall under the same idea, at least that’s how I understand it, that we’re looking for regularities in the environment. Hmm. Yeah. And this is, uh, unsupervised, right? There is no teacher there telling the system that this is the right thing to do or not. So the, the teaching signal has to come from somewhere else, right?
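
A small sketch of one member of the Hebbian family just listed, a pair-based spike-timing-dependent plasticity window; the amplitudes and time constant are illustrative, not measured values.

```python
import numpy as np

def stdp_weight_change(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window: potentiation when the presynaptic spike precedes the postsynaptic
    one (delta_t = t_post - t_pre > 0), depression when it follows."""
    delta_t_ms = np.asarray(delta_t_ms, dtype=float)
    return np.where(delta_t_ms > 0,
                    a_plus * np.exp(-delta_t_ms / tau_ms),
                    -a_minus * np.exp(delta_t_ms / tau_ms))

# Pre-before-post pairings potentiate, post-before-pre pairings depress.
print(stdp_weight_change([-40.0, -10.0, 10.0, 40.0]))
```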

Yiota    00:25:30    Yeah. And, uh, in animals and humans, we, our brains are directly connected to our bodies, so we execute actions, and then the teaching signal comes from the result of the action, right? If you’re trying to reach to take a glass and you fail to do it, then you know it doesn’t work. But if you succeed, then you have some kind of a signal that I got this thing right. So the teacher comes, the teaching signal comes externally; it’s the result of the action, and not something that happened inside the brain, which somehow, of course, is propagated back to the brain and, and serves as feedback within mm-hmm. <affirmative> feedback information. So it’s a loop that involves, you know, a thought, an execution, an end result, and a signal back. In, uh, in artificial systems, what, what is the, the feedback?

Paul    00:26:24    Right? So it’s just an, an external label and it gets back propagated. Yeah.  

Yiota    00:26:28    Which we give it as the answer to the fact that the artificial intelligence system does not have an arm to reach out and, and execute, uh, an action and generate the feedback.  

Paul    00:26:39    Yeah. But, but I mean, even, okay, so let, let’s go back to a biological system. Um, so you have that feedback loop, but then not only do you have the Hebbian-style plasticity, you have homeostatic plasticity, you have intrinsic plasticity, and these types of changes are constantly in flux. It’s not like they’re sitting there and waiting; like, homeostatic plasticity is just a, a function of the cells needing to survive and, you know, self-regulating, et cetera. Um, so how do we square all of those dynamic changes with some sort of, you know, feedback that eventually gets it right <laugh>, right? Right enough.

Yiota    00:27:16    So I think that all these plasticity rules, they are, uh, solving different problems, and they act at multiple scales. Like, homeostatic plasticity is needed to make sure that you don’t keep changing the synaptic weights in a positive manner so that you reach hyperexcitability and then you have seizures in the brain, right? Mm-hmm. <affirmative>. So you have to scale the weights down to, to make sure that you don’t burn your brain. You also have to scale weights up occasionally, if there is something happening in the brain that is pushing towards depressing synapses, because you want to maintain excitability. So as a result of learning, there will be changes in the synapses. And then, to make sure that these changes are not driving the system away from a stability point, you have homeostatic plasticity. So they need to operate together with Hebbian plasticity, which is going to be upregulating or downregulating selectively some weights without, you know, a way to stop this thing, right?

Yiota    00:28:12    Mm-hmm. Because if you don’t have a, uh, gradient descent, let’s say, like system, then you don’t know when you actually reach your target. It’ll keep on changing. And that’s another, another one of the things that we’re, uh, you know, frequently thinking about when we’re building models: how do the neuronal circuits know when to stop? What is the stopping criterion in the brain, right? One stopping criterion could be that, let’s say, you reach the limit of the resources that are available to the synapses, so you don’t have any more plasticity-related proteins, so you have to stop. Um, another criterion could be, I don’t know, that you’ve done so many changes and you run out of ATP, let’s say. You know, they will mostly be related to resource availability, rather than a signal that says, okay, you’ve got it right now, you need to stop, because we don’t really know what it is that tells a circuit, you know, that now your task of updating is done.
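
A toy sketch of the interaction described here, with arbitrary rates: an unconstrained Hebbian term keeps strengthening correlated inputs, while a multiplicative synaptic-scaling step pulls the summed weight back to a set point so the unit never runs away.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_with_scaling(n_syn=50, n_steps=500, lr=0.02, target_total=10.0):
    """Unconstrained Hebbian potentiation plus multiplicative synaptic scaling back to a set point.
    Scaling is applied every step here for simplicity; biologically it operates on much slower
    timescales. All constants are illustrative."""
    w = np.full(n_syn, 0.2)
    correlated = np.arange(10)                       # a group of inputs that tends to fire together
    for _ in range(n_steps):
        x = (rng.random(n_syn) < 0.1).astype(float)  # sparse background activity
        if rng.random() < 0.5:
            x[correlated] = 1.0                      # the correlated group fires together
        post = w @ x
        w += lr * post * x                           # Hebbian: proportional to pre * post
        w = np.clip(w, 0.0, None)
        w *= target_total / w.sum()                  # homeostatic scaling keeps total drive bounded
    return w

w = hebbian_with_scaling()
print(round(w.sum(), 2), round(w[:10].mean(), 2), round(w[10:].mean(), 2))
```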

Paul    00:29:13    Yeah. How do you choose what to implement in, in your, uh, model, in your models then?  

Yiota    00:29:20    Well, in fact, we’re implementing lots of plasticity rules, all of them working together, and then we take one out, we take the other out and see what is the impact on the, on the model, on the model’s ability to learn. And we try to figure out what is the contribution of each one of these rules. And yeah, I mean, I’m not sure if it’s the right way to do it, but it’s definitely a good way to gain insight as to the necessity and the comple complementarity of the different rules that operate.  

Paul    00:29:48    I mean, what other way would there be to do it?  

Yiota    00:29:51    Well, many people just use one plasticity rule. Oh, sure. You know? Yeah. And look at what you can get with a system that implements this learning rule. And I think that’s a big problem because you miss out on the interactions between these different kinds of plasticity rules and the contributions that they have, let’s say, at the circuit level, versus a much bigger scale or a smaller scale.  

Paul    00:30:13    Hmm. All right. Let me ask you this. Uh, so there are, you know, all these different types of plasticity, all these different types of dendrites, all these different types of neurons. I believe I read in one of your manuscripts that you suggested that the dendrite, we might, should start considering the dendrite the functional unit of the brain. So I mentioned earlier that for a long time the neuron was the functional unit of the brain, and now people are talking about, you know, circuits, microcircuits as the functional unit of the brain. And, uh, relatedly, related to cognition. Does there need to be a functional unit of the brain, and might dendrites be that, at least in your head?

Yiota    00:30:53    So the people who first proposed, let’s say, that the dendrite should be the functional unit of the brain, they did so back in 2010. In fact, there was a nice review paper by Branco and Häusser, uh, which was based on our work and their work, and many people’s work, showing that dendrites are essentially doing, uh, non-linear computations, like the cell bodies. Wait, let me, so

Paul    00:31:15    I, I’m sorry to interrupt you, but I, I mean, I just would be remiss without mentioning your work in 2003, showing that a single neuron is comparable to a two-layer artificial neural network. Um, yeah. And then, and, and since then, you know, that number has grown, right? And so was yours the first study to show that?

Yiota    00:31:34    To my knowledge, yes, we were the first, uh, uh, to predict that essentially dendrites act like small cell bodies and they can solve the same kind of problems, using a similar activation function, a sigmoidal activation function, like, uh, you know, the cell bodies of neurons. This was back in 2003. Um, and then soon after, people started talking about having more such non-linear computing units inside the neuron. So not only individual dendrites, but maybe you can break them down into their, uh, distal apical, proximal apical and basal compartments. And in those compartments, you have multiple of these non-linear units. So you have like a three-layer, let’s say, or a four-layer neural network. And then recently, in 2021, there was a nice paper by the lab of, um, uh, Michael London, where they showed that you may even need a, uh, deep neural network of up to seven layers to approximate the temporal computations that are done in a single cortical neuron. Yeah.

Paul    00:32:37    Okay.  

Yiota    00:32:38    So it’s increasing. We started with a two-layer network, pointing to the, uh, dendrite as a unit, and now it’s becoming a multilayer, a deep neural network, uh, in a single neuron.

Paul    00:32:48    Yeah. Okay. All right. I just wanted to mention that because you were talking about the 2010, uh, review where the, the authors suggested that dendrites might, uh, be cons should be considered, uh, a functional unit of the brain. So, uh, carry on. Sorry to interrupt you.  

Yiota    00:33:01    Yeah, yeah. So I was saying that I think we are now beyond the individual dendrite being a computing unit. And the computing unit is much smaller. And I think that it’s, it’s just a bunch of synapses, a small cluster of synapses, oh, that will be the, yeah, the computing unit of the, you know, the computing unit of a neuron, let’s say. Okay. Because you have units at multiple, uh, levels, as you said; the microcircuit could be the unit of, uh, cognition. So it depends on what kind of unit we’re talking about here. Yeah.

Paul    00:33:30    Right. You, okay. So you mentioned the synaptic clustering earlier, and I was gonna ask you just to explain what that was, and its significance because you just brought it up again in terms of being a functional unit. So could you describe that more? Yes.  

Yiota    00:33:42    So this was another, uh, one of the early studies that we’ve done with Bartlett Mel back in 2001, where we essentially showed that if you have dendrites that are capable of generating spikes, they’re non-linear dendrites, and you have a plasticity rule that depends on detecting, uh, inputs that fire together, and therefore they should wire together. So a Hebbian plasticity rule, but which rule is now local. So it depends on what happens in the dendrite, not what happens at the cell body. So if, if two inputs, let’s say, fire together and they cause a dendritic spike, then they would undergo plasticity within that dendrite, and they will become strengthened together. Mm-hmm. <affirmative>. And as a result of such a rule, you have all the spines or the inputs that are correlated in terms of their firing profile being co-strengthened within a given dendrite.

Yiota    00:34:32    And therefore, you form small clusters of synapses that are spatially close to each other because of their temporal, let’s say, correlation. And temporal correlation in this case would also mean somewhat of a, uh, functional correlation, because typically inputs that project onto the same dendrite and fire at the same time, they also carry similar information. If you think of the visual cortex, they’re probably sampling from the same location in the visual field, let’s say. Yeah. Or if you think of the auditory cortex and the animal is hearing a particular sound, these inputs would carry frequencies that are found in that particular sound, in that word, if you want. So there, the functional correlation would result in the spatiotemporal, um, uh, strengthening of inputs in a small part of the dendrite. So that’s, uh, which is mediated by this, um, Hebbian rule, which is now local, so it considers the ability of the dendrite to fire.

Yiota    00:35:27    As a result, now you have the formation of groups of synapses within the dendrite; they’re close to each other, they are strengthened together, and they have the ability to fire the dendrite. Mm-hmm. <affirmative>. We think that this is a unit, because if a signal that comes in via, you know, four or five synapses firing is significant enough information that it has induced plasticity this way, through this, uh, you know, formation of a cluster of synapses, then this would be, you know, an independent computation that is propagated to the cell body through the dendritic spiking mechanism. So that’s what we think is, um, you know, why we think it’s a computing unit.
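
A toy sketch of the branch-local rule described here (all parameters invented): synapses that fire together on the same branch and trigger a dendritic spike there are strengthened together, so a correlated group of inputs ends up as a strong cluster while scattered background inputs stay weak.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_clustering_rule(n_branches=5, syn_per_branch=20, n_steps=2000,
                          branch_threshold=1.0, lr=0.05, w_max=1.0):
    """Branch-local Hebbian rule: plasticity is gated by a dendritic spike in that branch,
    not by what the cell body does. Synapses 0-4 on each branch carry a correlated input group."""
    w = rng.uniform(0.1, 0.3, size=(n_branches, syn_per_branch))
    for _ in range(n_steps):
        x = (rng.random((n_branches, syn_per_branch)) < 0.05).astype(float)  # background
        if rng.random() < 0.3:                         # the correlated group arrives together,
            x[rng.integers(n_branches), :5] = 1.0      # landing on one branch at a time
        drive = (w * x).sum(axis=1)                    # local depolarization per branch
        spiking = drive > branch_threshold             # branches that fire a dendritic spike
        w[spiking] += lr * x[spiking]                  # strengthen only the co-active synapses there
        np.clip(w, 0.0, w_max, out=w)
    return w

w = local_clustering_rule()
print(np.round(w[:, :5].mean(), 2), np.round(w[:, 5:].mean(), 2))  # clustered group vs the rest
```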

Paul    00:36:09    A computing unit of a neuron,  

Yiota    00:36:12    Um, you could think of it as of a dendrite, because you can have multiple such clusters within a dendrite.

Paul    00:36:18    Okay. Cuz I wanna, you know, the, the dream right is to link these really low level mechanisms, low level, like a synaptic cluster to higher level cognition, and not just like, say, well, you know, uh, piano playing is linked to these three clusters that’s, um, naive. Yeah. But have some sort of way to look at the different scales, right? Yes. And kind of link them. And it is that  

Yiota    00:36:47    I will answer that. Okay. Yes. So, first of all, I should say that this, uh, clustering was a prediction back in 2001, when we used our models, and now it has been verified experimentally in numerous studies. Oh, cool. People see, see that experimentally: you have the formation of these clusters. And many of the times, most of the times, they carry correlated information. So, you know, a prediction that was made 20 years ago, uh, was verified, um, you know, it started being verified 10 years ago, and now many papers come out. So I think that’s really important for the role of models, and that I wanted to highlight. Now, how do we link this to a computation at a higher scale? So we have some really nice, I think, work with the lab of Alcino Silva, which was published in 2016, where we showed that the mechanism that links information across time, so like two memories of an animal experience that are separated by a few hours, is through these synaptic clusters.

Yiota    00:37:46    Because you have one memory, let’s say a given context, that is formed in a population of neurons, and then a couple of hours later the same animal experiences a different context. And now the synapses that carry information about the second context end up being co-clustered with the synapses that, uh, carry information from the first context. Why is that? Well, there are interacting plasticity rules here. One is the plasticity of intrinsic excitability, which says that a group of neurons that has learned something will remain excitable for several hours, and therefore it will be, uh, let’s say, ready to capture another memory. And this will be the mechanism that also underlies the formation of episodic memories in the brain. So that’s how you link them. But beyond storing the memories in the same population of neurons, we predicted with our models that they would also end up forming clusters on common dendrites. I should say that this was a prediction of the model. And now we have a paper under review, which hopefully will come out soon, that supports, I wouldn’t say a hundred percent verifies, but supports this idea that this linking within dendrites is the mechanism by which you bind information across time. So that’s how you go from a dendrite to a behavioral phenotype, you know, at the circuit and the behavioral level, which is: how do I take two independent entities, two memories, and associate them and link them across time?

Paul    00:39:17    So, I mean, I I, I’m not trying to criticize or harp on this at all, but you know, I’m, the real dream is, is, um, connecting synaptic cluster to composing a symphony or, um, imagining a, you know, story or some, you know, like some quote unquote higher cognitive ability.  

Yiota    00:39:37    The way I think about it, but this is a hypothesis, is that these small clusters, we have a review on this, every cluster is like encoding a piece of a sound that you frequently hear together, like a word, right? Mm-hmm. <affirmative>, which consists of phonemes, because these phonemes are frequently encountered together in a given word. They are encoded by a small cluster. So you can think of clusters as words or pieces of music, and then you can combine them in different ways by activating different dendrites, let’s say, or the dendritic compartments that contain these words, and generate sentences and generate, uh, sounds or music or a symphony. Okay? So this gives you the opportunity to form many, many different combinations of items that have a meaning. They, they form a word or a sound, and that’s how it’s easy to encode this, uh, you know, using few resources, like five synapses. But in a way that, when you activate them, the signal will go through, because it’ll generate a dendritic spike. So it’s a strong enough input, let’s say, and it’s a way to increase the signal-to-noise ratio also, by having these small clusters carry the meaningful information. Hmm. That’s how I think about it. Um, but as I said, it’s a, you know, a hypothesis.

Paul    00:41:03    I mean, thinking just about music, and you said phonemes. I, I had David Poeppel on a long time ago, and I’ve had other people also, you know, even recently, Earl Miller, and they’re interested in oscillations mm-hmm. <affirmative> and their, their contributions to, in David’s case, like, uh, words and, you know, the different bits, uh, of words, like the phonemes and the, you know, uh, you know, whole words, et cetera, and just the rhythm that we speak and understand, et cetera. Do you have, are, do you consider waves, uh, as important? Have you thought about that as part of the story, with an interaction with the synaptic clustering?

Yiota    00:41:40    To be honest, I haven’t thought about that a lot. Okay. We haven’t modeled the waves in our models at all. I think they are important for sure, uh, especially when you think about the role of interneurons and how they are modulated by, you know, oscillatory rhythms. But, um, yeah, I, I, I don’t have much to say because we haven’t really worked on that yet.

Paul    00:42:00    Yeah. But you have, you do incorporate, uh, interneurons into your models? Uh,

Yiota    00:42:05    We do, but, uh, we don’t, we don’t really study rhythms that much. We have them being modulated by rhythms, like in, in their input. So they fire at the, yeah, peak or the trough of theta, depending on the experimental data. So we do that, but we haven’t really studied explicitly what would be the impact, let’s say, of these rhythms on encoding. Hmm. Which would be a very interesting question to

Paul    00:42:31    Ask, I think. Yeah. You’re kind of enforcing them in some way, then, but Yes,

Yiota    00:42:34    Exactly. We’re enforcing them. They, they are not, you know, an emergent property of the system. Right?

Paul    00:42:41    Yeah. You define the, the timing. Um, the other thing I was gonna ask about, when you’re talking about intrinsic plasticity, um, so this has to do like with your work with Silva, the, the engram. Are you, so you buy into the, the idea of an engram? That, um, so, so with intrinsic plasticity, right, you have, um, uh, correlated signals coming into a neuron and it, it stays active or ready to be active in a state for a short period of time, right? Mm-hmm. <affirmative>. So then you can associate more things, and, and that would be an engram cell. Correct.

Yiota    00:43:11    So an engram cell, by definition, at least my definition, is, is a neuron which is involved in the encoding of a particular memory. Mm-hmm. <affirmative>, that’s why you call it an engram. So you learn something, and this something is somehow stored in the brain. It’s, you know, somewhere. And this somewhere is, uh, you know, a group of neurons that, when you try to recall that particular memory, they become active. So that’s an engram, uh, an engram cell; it’s a part of a memory, it’s a cellular correlate of the memory. And engram cells, there is a lot of literature saying that they remain active for a few hours mm-hmm. <affirmative> so that they can, uh, capture more memories, and they remain active because they have increased levels of CREB phosphorylation. So essentially it’s like lowering the threshold for, uh, somatic activity. So it’s easier that they get, uh, activated by subsequent inputs. So yeah, I think the engram is a valid story. Many people are working on memory engrams. How else would a memory be stored in the brain? It has to be somewhere in the, in the neurons. Right.
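
A toy sketch of the allocation logic implied here, with invented numbers: each memory recruits the currently most excitable neurons, and recruitment transiently boosts excitability for a few hours, so two memories formed close together in time share many neurons while memories formed days apart do not.

```python
import numpy as np

rng = np.random.default_rng(2)

def allocate_memories(times_hr, n_neurons=1000, engram_size=100, boost=3.0, tau_hr=6.0):
    """Each memory recruits the currently most excitable neurons; recruitment transiently
    boosts excitability, and the boost decays over a few hours."""
    baseline = rng.normal(0.0, 1.0, n_neurons)
    transient = np.zeros(n_neurons)
    engrams, last_t = [], 0.0
    for t in times_hr:
        transient *= np.exp(-(t - last_t) / tau_hr)              # boosts fade between events
        excitability = baseline + transient + rng.normal(0.0, 0.5, n_neurons)
        chosen = np.argsort(excitability)[-engram_size:]          # most excitable cells win
        transient[chosen] += boost                                # recruitment raises excitability
        engrams.append(set(chosen.tolist()))
        last_t = t
    return engrams

e = allocate_memories([0.0, 5.0, 48.0])    # two contexts 5 hours apart, a third two days later
print(len(e[0] & e[1]), len(e[0] & e[2]))  # overlap is high only for memories close in time
```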

Paul    00:44:13    Well, that’s what I was gonna take us to next, and I’m sorry, maybe this is, I’m getting us too far, um, afield, but someone like Randy Gallistel would say, given, you know, this just, uh, constant turnover of plasticity, connections, disconnections, new formations, strengthening, weakening, homeostatic, intrinsic, that all of that is too changeable and you need something much more stable. And, you know, his idea is that you have to encode it in some sort of longer-form, um, uh, material like RNA or DNA or something. Right. And, um, I guess the story would be, uh, that during this, uh, plastic time, this intrinsic excitability time, for a few hours, it’s actually the, uh, neuron’s, like, um, subcellular machinery, quote-unquote molecules, right? Encoding that activity in some sort of RNA sequence. Um, uh, I don’t, I’m not sure if you’re familiar with that story, but his argument is that, that it is just, you can’t, an engram is too, um, plastic. It’s changing too much. Like, how could a memory be stored in an engram?

Yiota    00:45:18    Okay, so there are two answers to this question. First of all, I, I agree that there is a lot of, uh, dynamic activity in the brain. And in fact, this turnover is extremely useful; at least in our models and in animal models, it shows that if you have higher spine turnover, then you learn, learn faster. One reason could be because you’re overwriting previous stuff, right? And then that’s why you learn faster. It is possible. But on the other hand, the number of, uh, synapses that are activated every time we try to recall something or we learn something is very small. It’s like 2% of the inputs in a given neuron. So I don’t think that capacity is really a problem here. Every pyramidal neuron has 10,000, 10,000, um, excitatory inputs. If you only use 2% of those for a, a given memory, and then consider that you have all those combinations of neurons that you can use to form a cellular engram, you know, the number is huge.

Yiota    00:46:14    And even if you consider what we are claiming, that the, the actual engram is at the dendrite level and not necessarily the neuronal level, then it decreases massively. So I’m not sure I’m convinced by the argument that we run out of space in terms of storing memories. That’s one thing. The other thing is that you have much higher, let’s say, turnover or dynamic activity in particular areas of the brain, like, uh, the hippocampus, for example. Yeah. Where its job is not to store these memories forever. Its job is, let’s say, to create episodes. Its job is to link things together, and then, according to the, uh, two-stage memory hypothesis, whatever is really useful information is extracted and, and stored elsewhere in the cortex, where the turnover is slower, and the hippocampus is there essentially to reach out to these places and bring back the pieces of information that will then be integrated and form a memory. So there are solutions to this problem with existing theories. Um, and I don’t think they are unrealistic. Now, it is also possible that some aspects of, if you wanna call it, memory are also stored in the DNA. People talk about, uh, epigenetics mm-hmm. <affirmative>, you know, um, it’s another mechanism for storing memories for a very long time. And I am not rejecting that idea. I just think that those are different kinds of memories, um, than the ones that we use on a daily basis. Um,
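
A quick back-of-the-envelope check of the capacity point, assuming the round numbers quoted here (10,000 excitatory inputs, roughly 2% used per memory):

```python
from math import comb

n_inputs = 10_000                   # excitatory synapses on one pyramidal neuron (her round figure)
per_memory = n_inputs * 2 // 100    # ~2% of inputs engaged per memory = 200 synapses

subsets = comb(n_inputs, per_memory)
print(f"about 10^{len(str(subsets)) - 1} distinct {per_memory}-synapse combinations on one neuron")
```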

Paul    00:47:42    Okay. Yeah. Yeah. So this is another thing I was gonna ask you. Um, so I’ll just ask it now. Sorry to kind of interrupt there, but, but yeah, I mean, should we think about memory as being, you know, at multi scales also, right? And, um, that kind.  

Yiota    00:47:56    Absolutely. Yeah. Yes.  

Paul    00:47:57    So you’re familiar with like that distributed and distributed? Yeah.  

Yiota    00:48:00    Yes.  

Paul    00:48:01    Because you’re familiar with the RNA transfer in worms and, what is it, sea slugs? I don’t, uh, where, um, you, you can, um, train a conditioned response in one organism and then extract some RNA, put it in another organism, and that new organism will have that quote-unquote memory, which is a behavioral output in this case. Behavioral,

Yiota    00:48:23    Yes,  

Paul    00:48:23    Yes. But how should, so then you think, well, maybe that’s not that complex of a memory. Maybe it’s just that is somehow encoding the right, um, activities for that given behavior or something. So is that how you think of it as kind of a multi-scale memory? Yes.  

Yiota    00:48:37    Yeah. I think that also experiments in mice, when, uh, they’ve shown that, uh, there is a hereditary aspect of different behaviors, uh, that they think are, um, uh, you know, transmitted, let’s say, to the offspring through, um, epigenetic mechanisms in the, in the sperm, I think, or, or both the sperm and the eggs, I can’t remember quite, uh, and seen multiple generations later, which, which are changes in the behavior. So, uh, that’s why I’m saying I’m not rejecting that idea. I’m not fully convinced either, because you have a lot of, um, uh, behavioral mimicry, uh, here. So you can, I dunno if you can, for sure, you know, make the distinction between an animal learning from another animal through behavioral mimicry versus, uh, you know, inherited memories. Um, but yeah. But, um, I like to be, uh, open-minded, okay, let’s say. Uh, and definitely I believe that memory is multi-scale and distributed, and there are different kinds of it, uh, taking place in, uh, you know, different parts of the brain and possibly other organs.

Paul    00:49:44    It’s fascinating. I, I find myself, um, sort of surprisingly, uh, to me just, um, becoming more and more enamored with the multi-scale capac, just the capacity, right. A as for that to be such a beautiful, it’s becoming more and more beautiful to me as we go along. Um, anyway, it’s nice to still remain in awe of, of things, right? After such a long time studying stuff, you have children, right?  

Yiota    00:50:11    Yes. Three.  

Paul    00:50:13    Three. Oh, okay. Yep. Oh, man, that’s, that’s like, uh, three too many for me. But, um, I have, I have two kids, but, um, I Do you ever worry like, uh, are you like me that your former self wasn’t as careful and, uh, um, not, not the best human <laugh> and, you know, that I, I worry, what did I pass on? Did I pass on some trauma through epigenetic mechanisms to my children? Do you ever think about that? Yeah.  

Yiota    00:50:35    Yes, I do. I do. Especially because in our line of work, we work very long hours, we are very stressed and, you know, it’s possible. I mean, there are studies showing that the stress  

Paul    00:50:46    Yeah.  

Yiota    00:50:47    You know, can impact, um, the embryo in many unforeseen ways. So yeah, I hope not. But <laugh>,  

Paul    00:50:55    Maybe just the, the good parts, hopefully.  

Yiota    00:50:57    Let us hope.  

Paul    00:50:58    Yeah. Keep telling myself that. Um, so again, I’m kind of jumping around here, but, um, one of the things that dendrites do, uh, is they have this non-linear effect, right? Which is kind of emulated in sigmoidal functions somewhat mm-hmm. <affirmative>, um, in artificial networks. But these days, artificial networks don’t care about sigmoid functions. It’s all ReLU, which are these like linear, um, functions. And they do just fine. What does that mean? What, what should we take from that?

Yiota    00:51:24    Hmm. Very interesting point. In fact, the ReLU is not, I mean, most of the people, they don’t use the fully linear one, right? It’s saturated even. Uh, yeah. So, so it’s not linear, first of all; it’s not sigmoidal.

Paul    00:51:37    It’s not sigmoidal. It’s not linear, yeah. Yeah. It’s,

Yiota    00:51:39    It is not sigmoidal, but you couldn’t call it linear. Right. Um, it’s two,  

Paul    00:51:43    It’s linear with two parts. I mean, do we, how picky do we need to be? Because, well,  

Yiota    00:51:48    So it can, any function can be approximated with small linear parts, if you think about it that way.

Paul    00:51:53    Okay? Okay. True. Your calculus is  

Yiota    00:51:55    Coming from, how many linear parts do you need, right? That’s, uh, the big question here. Okay. Um, yeah, good question. They’re doing great with ReLU, I know. And we are using it in our networks as well. Maybe it has to do with, uh, the plasticity rules, the, the learning rules that these networks use, which are not very similar to the biological ones.
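
A small sketch of the piecewise-linear point: a handful of shifted ReLUs, fit by least squares, is enough to approximate a sigmoid-like nonlinearity quite closely; the breakpoints are chosen arbitrarily.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-8.0, 8.0, 400)
breakpoints = np.linspace(-6.0, 6.0, 7)                        # 7 hinge locations, arbitrarily chosen
features = np.maximum(0.0, x[:, None] - breakpoints[None, :])  # one ReLU feature per breakpoint
features = np.column_stack([np.ones_like(x), features])        # plus a constant offset

weights, *_ = np.linalg.lstsq(features, sigmoid(x), rcond=None)
approx = features @ weights
print("max |error| of the piecewise-linear fit:", round(float(np.abs(approx - sigmoid(x)).max()), 4))
```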

Paul    00:52:21    Well, and that, so would the suggestion then be that those artificial learning rules are more powerful than the biological ones?  

Yiota    00:52:30    Well, I think that nobody can, uh, say that, uh, backprop is not a super powerful learning rule. Right? Well,

Paul    00:52:40    Well, people talk about how ungodly slow it is  

Yiota    00:52:45    Because you  

Paul    00:52:46    Yeah. Because you’re just taking very small nudging steps right. Toward a gradient. Right. And, and it takes a machine. Yeah. But  

Yiota    00:52:53    You’re always taking the right steps <laugh>, you’re always moving in the right direction. So in fact, uh, you know, Hebbian rules are slower.

Paul    00:53:01    Well, there's a guarantee, but it's computationally slow, perhaps. I'm not expert enough to argue about this, but, yeah, that's what... yeah, no,

Yiota    00:53:09    I don't think the argument is that it's very slow. The biggest argument for us, at least as neuroscientists, is that I am not convinced it's biologically plausible, personally, because I don't see how exactly one layer would know what the synapses of a layer, let's say, three steps deeper are doing. How would a particular weight of a neuron know how it contributed to the output that was computed five layers higher?
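
For readers who want the credit-assignment worry above in concrete form, here is a tiny toy sketch (added here, not from the episode; the network and all names are invented for illustration): in backprop, the update to an early-layer weight depends on downstream weights and errors, information that is not local to that synapse, whereas a Hebbian update needs only the local pre- and postsynaptic activity.

```python
# Toy two-layer network: compare what information each learning rule needs.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input
W1 = rng.normal(size=(4, 3))      # first-layer weights
W2 = rng.normal(size=(2, 4))      # second-layer weights
y_target = rng.normal(size=2)

h = np.tanh(W1 @ x)               # hidden activity
y = W2 @ h                        # output
err = y - y_target                # output error (gradient of squared loss)

# Backprop update for W1: requires W2 and the output error, i.e. knowledge of
# how downstream layers transformed the signal (the non-local ingredient).
delta_h = (W2.T @ err) * (1.0 - h**2)
dW1_backprop = np.outer(delta_h, x)

# Hebbian update for W1: uses only pre- and postsynaptic activity at the synapse.
dW1_hebbian = np.outer(h, x)

print("backprop dW1 needs W2 and the error:\n", dW1_backprop.round(2))
print("Hebbian dW1 needs only local activity:\n", dW1_hebbian.round(2))
```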

Paul    00:53:43    Right, yeah. And it's clearly not biologically plausible as such, but I mean, you're aware of the approximations, right? The many proposed approximations.

Yiota    00:53:51    Yes, I am. That's one thing I'm not convinced of. And the other thing that I don't really buy with this scheme of error correction is that a lot of studies show that dendrites in fact compute associations, they compute coincidences. That's when you have the generation of dendritic spikes: not when you are making an error and therefore don't have input coming in, but when you have two inputs that happen to coincide, so you are positively detecting conjunctive signals. Whereas a method that is based on errors in order to correct a system doesn't really align well with the experimental evidence that dendrites normally fire when you have coincident inputs coming in. So these are the two things that, for me, don't sit very well with the way dendrites are helping the brain to learn.
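
As a minimal sketch of the coincidence-detection idea described above (toy numbers added here, not a fitted model from the episode): a dendritic branch responds roughly linearly to isolated inputs, but adds a large supralinear "dendritic spike" component only when two inputs arrive within a short time window.

```python
# Toy dendritic coincidence detector: supralinear response only for
# near-simultaneous inputs on the same branch. All values are illustrative.
def branch_response(spikes_a, spikes_b, window_ms=5.0, epsp=1.0, dspike=5.0):
    """Summed EPSPs plus a dendritic-spike boost per near-coincidence."""
    linear = epsp * (len(spikes_a) + len(spikes_b))           # linear baseline
    coincidences = sum(1 for ta in spikes_a for tb in spikes_b
                       if abs(ta - tb) <= window_ms)          # conjunctive events
    return linear + dspike * coincidences

print("A alone:            ", branch_response([10.0], []))
print("B alone:            ", branch_response([], [12.0]))
print("A and B far apart:  ", branch_response([10.0], [40.0]))
print("A and B coincident: ", branch_response([10.0], [12.0]))
```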

Paul    00:54:52    Is that why you're modeling spiking neural networks?

Yiota    00:54:56    Yeah, we're modeling all kinds of networks: biophysical, spiking, and artificial neural networks. And I think the spiking neural networks are important for many reasons. First, you know, they're closer to the real thing, right?

Paul    00:55:10    Oh, I thought you were gonna say efficiency.

Yiota    00:55:12    <laugh> No, I was gonna say they can generate spikes. Okay, so they're closer to biological neurons. Secondly, they are much easier to implement on neuromorphic hardware, which is designed for this purpose. And I think that neuromorphic hardware, and any new technology that is trying to consume less energy, is the future. So yes, they are efficient in that sense. They are faster, you can implement unsupervised learning rules much more easily on these systems, and you don't have to incorporate all of the complexity of the biological neuron like we do with Hodgkin-Huxley models, which we also use in the lab extensively. So I think they are a very good compromise between the detailed networks and the very, very abstract networks that we use in artificial systems.
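
To give a sense of why spiking units are such a cheap compromise compared with Hodgkin-Huxley-style models, here is a minimal leaky integrate-and-fire sketch (generic textbook form; the parameter values below are arbitrary): one state variable and a threshold rule, instead of four coupled differential equations per compartment.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: one ODE plus a threshold rule.
dt, T = 0.1, 200.0                                           # time step, duration (ms)
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # ms, mV
drive = 16.0                                                 # effective input drive (mV)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    v += dt * ((v_rest - v) + drive) / tau    # leaky integration toward v_rest + drive
    if v >= v_thresh:                         # threshold crossing counts as a spike
        spike_times.append(round(step * dt, 1))
        v = v_reset                           # reset after the spike event
print(f"{len(spike_times)} spikes in {T:.0f} ms at {spike_times}")
```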

Paul    00:56:03    How do you decide what level of biological detail is the right level?  

Yiota    00:56:09    It's a matter of a...

Paul    00:56:10    Question. Your computing resources? I'll just leave it as an open question, instead of putting answers in your mouth.

Yiota    00:56:15    We try to follow Occam's razor: as simple as possible, but not simpler than needed to answer the particular question at hand. So if you want to study receptors, let's say how the NMDA receptor or how sodium channels influence dendritic integration, I would go to a detailed biophysical model, because that's the only way to capture the kinetics and the spatiotemporal interactions between mechanisms in a dendrite. You have to incorporate the morphology and the conductances of the other channels and all the gating parameters. So if you want to go very subcellular, you need a detailed model. But if you're interested, let's say, in how dendrites impact circuit computations, then you don't really need all that detail. You can just abstract these local computations into a mathematical function, a transfer function, like, you know, the induction of a long-lasting spike, let's say.

Yiota    00:57:07    Not a purely mathematical one, like a sigmoid, but something that somehow incorporates the temporal dynamics of dendrites, and put it in a spiking neural network. And now you can build a big network, because you don't have very expensive subunits, you don't have tons of differential equations, you have a few, and you can simulate large circuits and see how dendrites influence circuit computations. And again, if you want to go to machine learning and see whether you can make a difference there by adding, let's say, some of the dendritic features, you can also build classical artificial neural networks, you know, that don't consider time like spiking neural networks do. So we do all three approaches, let's say, in the lab.
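
The abstraction she describes, replacing channel kinetics with a per-branch transfer function, can be sketched very simply. Below is a toy version added here (names and parameters invented, not from the episode): each branch applies a saturating nonlinearity to its local inputs, and the soma thresholds the sum of branch outputs, which is the "neuron as a small two-layer network" picture.

```python
# A neuron abstracted as branch nonlinearities feeding a somatic threshold.
import numpy as np

def branch_nonlinearity(u, gain=4.0, thresh=0.5):
    """Sigmoidal stand-in for a local dendritic spike / saturation."""
    return 1.0 / (1.0 + np.exp(-gain * (u - thresh)))

def dendritic_neuron(branch_inputs, branch_weights, soma_threshold=1.0):
    branch_out = [branch_nonlinearity(w @ x)               # local branch computation
                  for x, w in zip(branch_inputs, branch_weights)]
    return int(sum(branch_out) >= soma_threshold)          # somatic spike or not

rng = np.random.default_rng(1)
inputs = [rng.random(5) for _ in range(3)]       # 3 branches, 5 synapses each
weights = [0.4 * rng.random(5) for _ in range(3)]
print("somatic output:", dendritic_neuron(inputs, weights))
```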

Paul    00:57:48    So this is what, um, well, maybe you can just say a word about Dendrify.

Yiota    00:57:53    Dendrify. So, Dendrify is a new tool that we just published in January, in Nature Communications. It's a tool that is very close to my heart, because the idea is to convince the community that you can put dendrites into your model in a very easy and straightforward manner. Essentially, we found a way to describe the non-linearities without using the Hodgkin-Huxley equations, which are computationally heavy but also, you know, mathematically demanding for some people, right? So we are describing the non-linearities with event-based mathematics. It's very simple: you have a pair of simple currents, let's say, to simulate a dendritic action potential, a sodium spike in the dendrite. And similarly for NMDA and calcium spikes; we're building in those aspects now, in fact, initially they were not there. So this is a tool, it's very well documented, it's built on Python, and we have examples in there for any naive user so that they can go in and play. We are maintaining the tool in the lab, and in fact we are gonna give a technical workshop at the CNS meeting this summer, so if people are interested, they can attend that. We are looking forward to convincing more people to work on dendrites.
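
To illustrate the event-driven idea, not Dendrify's actual API or equations, here is a rough sketch added here: when the local dendritic voltage crosses a threshold, a stereotyped pair of decaying currents is injected (a fast depolarizing one and a slower repolarizing one) instead of integrating Hodgkin-Huxley channel equations. Every parameter and name below is a placeholder.

```python
# Conceptual event-based dendritic spike: a threshold crossing triggers a
# stereotyped current pair rather than detailed channel kinetics.
import numpy as np

dt, T = 0.1, 60.0                        # ms
tau_m, v_rest, v_thresh = 15.0, -70.0, -55.0
tau_rise, tau_fall = 1.5, 5.0            # decay constants of the two currents (ms)
g_rise, g_fall = 10.0, 5.0               # their amplitudes (arbitrary units)
refractory = 5.0                         # minimum ms between dendritic spikes

v, i_rise, i_fall, last_spike = v_rest, 0.0, 0.0, -1e9
syn_drive = lambda t: 3.0 if 10.0 <= t <= 25.0 else 0.0   # square synaptic input

for step in range(int(T / dt)):
    t = step * dt
    i_rise *= np.exp(-dt / tau_rise)     # event currents decay exponentially
    i_fall *= np.exp(-dt / tau_fall)
    v += dt * ((v_rest - v) + syn_drive(t) * tau_m + (i_rise - i_fall)) / tau_m
    if v >= v_thresh and (t - last_spike) > refractory:
        i_rise += g_rise                 # fast depolarizing component
        i_fall += g_fall                 # slower repolarizing component
        last_spike = t

print(f"final dendritic voltage {v:.1f} mV, last dendritic spike at {last_spike:.1f} ms")
```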

Paul    00:59:14    How's that going, the convincing?

Yiota    00:59:18    I think it's going very well so far. Yeah. I see a lot of publications lately, in fact, incorporating dendrites, not necessarily through Dendrify, because that was just developed, right? But a lot of people are getting more and more interested in dendrites, especially with respect to their impact on the machine learning and AI fields. Which is great.

Paul    00:59:40    Yeah. Well, I don't know. I mean, it is, of course. So now I'm gonna ask about the AI field. I don't know how much interaction you have with people in AI on the industry side, right, applications and stuff, but

Yiota    00:59:54    Not much, I have to say.  

Paul    00:59:55    Okay. So they don't give a damn about dendrites, I'm sure.

Yiota    00:59:57    I know <laugh>,  

Paul    00:59:59    Um, someone, I can't remember who it was, was talking about how it seems like almost every lab, especially computationally oriented labs, considers building some standard tool, making it open source, and sharing it so the community will use it. And it probably takes a lot of effort to build something like Dendrify. Then often what happens is you build the tool, you put it out there, and you're expecting a lot of collaboration and interactions, but you almost have to sell it to the community, right? How much of what you're doing, by the way, is actually getting out there and saying, use Dendrify, here's the thing, versus people coming

Yiota    01:00:45    To you. I don't do much selling, to be honest. I haven't done it for two reasons. First of all, I think we have to convince people to be interested in dendrites before they will use Dendrify, right? Because the dendrites community is not that large anyway. So I'm definitely focusing on convincing people that dendrites are important and that they should be looking into them. And Dendrify, essentially, we didn't initially want to make a tool. It's the work of a PhD student who just wanted to make his life easier.

Paul    01:01:13    Okay.  

Yiota    01:01:14    Because he wanted to build spiking models and, you know, he wanted to avoid complicated equations. So he came up with Dendrify. So we did not plan to create the tool for the community, and maybe that's why we were not selling it as much. But a lot of people have expressed an interest in using it so far; they reached out and we've helped them. And I definitely believe that it's not enough to make a tool and just put it on a website, because people need to be trained to use it, right? So you have to offer some training opportunity. In fact, it's a responsibility: if you want the community to take advantage of what you created, you have to show them how. And that's why we are organizing some training workshops for that. But we don't plan to become tool developers. We just want to make whatever we think is helpful for us available to the community that is interested.

Paul    01:02:13    But now you can't help it; you feel that responsibility. It's taken on a larger role than originally planned, which was just to solve one student's problem.

Yiota    01:02:23    Yeah, that’s a good point. I’m hoping the student will still be interested in maintaining it. <laugh>.  

Paul    01:02:28    Oh, running a lab. I’ll never know. Yeah. <laugh>, perhaps. Um,  

Yiota    01:02:33    Maybe you’re better off <laugh>.  

Paul    01:02:35    Oh, I don’t know. What do you think? You think I’m better off now? I actually, I’m kind of like looking for a job. I don’t think I’ll be running a lab necessarily, but, um, well, maybe that’s a separate conversation. Uh, running a lab, yay or nay,  

Yiota    01:02:48    Running a lab is difficult, especially if you are in Greece. Funding resources are very limited. We already mentioned the lack of collaborations and, you know, of a local research community...

Paul    01:03:01    Dendrites, which there are like seven other people in the world studying.  

Yiota    01:03:04    Yes. Yeah. I mean, that may be good because it offers, uh, a niche opportunity, but it doesn’t, uh, you know, make sure that we have enough funding for it. Oh. So, you know, the efforts to secure funding are exhausting. I think they’re exhausting for everyone nowadays, and that’s why many people leave academia, unfortunately. Mm-hmm. But yeah, it’s a lot of work and, um, a lot of responsibility toward, towards the people in the lab, right. That depend on you securing funding, uh, for their future. So, on the other hand, it’s super fun, and it’s so rewarding because we’re always trying to figure out something new. Mm-hmm. <affirmative>, I mean, I love my job, right? I wouldn’t change it for anything, but it’s certainly very stressful.  

Paul    01:03:45    So I just moved to Pittsburgh, Pennsylvania, where I did my graduate school, and there are a couple of friends still here, and I reached out to them saying, hey, it'd be great to grab a coffee or a dinner sometime. And they're like, yes, great, but it needs to be like two months from now, because they're, quote unquote, running on fumes, you know? And I don't think that ever changes. That's it.

Yiota    01:04:10    Well, I think maybe the US is worse than Europe, but yeah. I mean, if you're running a successful lab, or you want to have a successful lab, it means working overtime, working too much. And, you know, I often wonder whether it's worth it, because there are other important things in life: family, health, you know, enjoying life, and

Paul    01:04:29    You’ve already ruined your children by passing on your stress  

Yiota    01:04:33    <laugh>. Well, you know, they don’t want to be scientists, so. Well,  

Paul    01:04:37    Good for them. Good for them.  

Yiota    01:04:38    Good for them. Yeah. Yeah.  

Paul    01:04:40    Let's talk a few more minutes about, alright, so you've created Dendrify, and it went into Nature Communications. Was that a hard sell getting it in? Or was it...

Yiota    01:04:53    Easier than I expected.  

Paul    01:04:54    Oh, really? I wouldn't have imagined. I don't know, I haven't watched closely and I don't have a track record here, but it's essentially a resource, you know? It's a tool.

Yiota    01:05:04    Yes.  

Paul    01:05:05    It's a tool. And maybe that's what Nature Communications specializes in, and I'm just naive.

Yiota    01:05:10    No, no, it doesn't. I think the reason they really liked it is that it bridges multiple fields. So it's good for modeling in neuroscience, but you can also use it for machine learning and artificial intelligence applications, and it's readily implementable in neuromorphic computing. And with it we also proposed a framework for why dendrites are important and how to study them. So it was a bigger thing that spans multiple communities, and I think that's why they really liked it.

Paul    01:05:40    Okay. Yeah. Okay. So, way back when, I had Nathaniel Daw on my podcast, and he is a traditionally computational cognitive scientist and neuroscientist. Oh, I'm sure you know him, yeah, through neuro modeling and everything. We got into a conversation about how he and his principal-investigator friends were thinking hard about whether they should start an experimental lab, because they have all these theoretically driven computational research approaches that make predictions, but then you have to convince people to do the experiments. And the question is, should we just start our own wet lab to do the experiments that our computations, our theories, suggest? And I said no. But you didn't take that advice, because you have transitioned your lab to become also an experimental lab. How has that transition gone? Was that a wise move? <laugh>

Yiota    01:06:47    Well, it's kind of early to tell, because we haven't published our first experimental paper yet. Until that moment comes, I don't think I can answer this question, but I can tell you why I did it. One reason is, as you said, we have to convince our collaborators to do the experiments we want, and that rarely happens. And I am a strong believer in collaboration, I have a lot of collaborations with many people, but most of the time, you know, the model comes in to explain the experimental data, which are generated by the experimental labs. Right.

Yiota    01:07:23    And I really wanted to have control over the type of experiments that we do, so that we can test our own predictions. As you said, it's, let's say, the satisfaction that you get from making sure that someone looks into all this hard work you've done over the years to generate these testable predictions. The other reason is that, you know, the animal is the real thing. The model is a model. It's great, it's an amazing tool, right? It can allow you to do experiments that are not even feasible experimentally, so it has an immense amount of power, but the real thing is what happens in a biological brain, and I really wanted to get a better insight into these processes. It's been fun. It's much slower than I expected, and more expensive than I expected.

Paul    01:08:13    The development or the actual research? Like getting both labs set up, and... okay, both. Yeah.

Yiota    01:08:19    Both. The behavioral experiments that we do right now last a month, and then you have to wait another two months for the animals to be at the right age, and then you start again. It's taking forever, which is something I had not really considered, because modeling is much faster.

Paul    01:08:35    Thank you for saying that. Anytime I ask that, people say, well, no, it's not necessarily faster. It really is faster, right?

Yiota    01:08:43    It is faster. Yeah, it is faster. I mean, it can take a very long time to troubleshoot, to find a bug, to optimize a model, but yeah, it's faster. It's definitely faster. It's not significantly easier, I wouldn't say that. In fact, I think that doing experiments is easier, because you need to have really good skills with your hands, whereas for modeling you really need good skills with your brain.

Paul    01:09:07    <laugh> and with lab experiments, you’re, you’re doing a lot of waiting around often also.  

Yiota    01:09:11    Yeah, yeah, exactly. Which is nice. It's, you know, more social, more relaxing. It gives you more time to think. But producing results is faster with models than with experiments, for sure.

Paul    01:09:24    So when can we expect the first, uh, paper, experimental paper?  

Yiota    01:09:27    Well, not very far off. But then again, I don't have a good feeling for what reviewers will ask, right?

Paul    01:09:34    Oh, oh, right. Yeah.  

Yiota    01:09:36    Because you have to submit it first and then see, you know. And there's also always the fear that, when you are a newcomer in the field, you have to convince people that you can actually do experiments. So, yeah.

Paul    01:09:49    Good luck with that.

Yiota    01:09:50    Thanks <laugh>.

Paul    01:09:51    These are rodents though, right?  

Yiota    01:09:53    Yes. We're working with mice, and we are doing in vivo head-fixed behavior, looking at the prefrontal cortex and at behavioral flexibility, and at how dendrites, or, you know, spines and spine turnover, may contribute to behavioral flexibility. And in fact, we started this project with funding from Germany, together with Matthew Larkum, so there is an aspect of the work that is done in Matthew's lab, actually, funded by a German grant. And now we're continuing it here at home at IMBB. And I'm actually very happy, because we have set up the first two-photon microscope in the institute.

Paul    01:10:33    Congratulations. Yeah.  

Yiota    01:10:35    So this is gonna help others as well.  

Paul    01:10:38    I think it helps create excitement around dendrites that people like you and like Matthew are working on them and doing such interesting work. I mean, you're good spokespeople for the dendrites community, I would think.

Yiota    01:10:51    Thanks. Yes. We’ll try our best.  

Paul    01:10:54    I asked about the rodents because, ultimately, we're interested in human brains, right? So, first of all, you start many of your talks with a slide that shows the progression of dendrites over the course of our lifetime. And it's pretty striking, actually. You and I are right now at the peak of our dendritic arborization in our lifetime. Well, maybe you are.

Yiota    01:11:19    <laugh>.  

Paul    01:11:20    Well, I'm at my peak, but that ain't saying much for me <laugh>. But it really is pretty striking, because from birth you have these neurons with kind of small branches, right, at least on these slides that you show. And then from, I think, 30 to 60, they get bigger and bigger, and there are these beautiful outstretched trees when you're your age. And then we're gonna start to decline, sooner rather than later, I suppose, and they kind of shrink back down again. And you show dementia and Alzheimer's, and how they shrink back down in different conditions and just with aging. And those are human cells that you show, right?

Yiota    01:12:02    Those are human cells. So  

Paul    01:12:04    I'm not sure about the difference between human aging and that of other organisms, for example mice. I'm sure that you know the answer to this, like how the dendritic arborization changes along the lifespan.

Yiota    01:12:14    Actually, I don't. I should look it up and have a similar slide for mice. I bet it's similar, but I have not looked at the same figure for mice.

Paul    01:12:26    Their cognitive peak must be like  

Yiota    01:12:29    Around 30 days, I'd say.

Paul    01:12:31    I was gonna say like two months. Yeah.  

Yiota    01:12:32    From 30 days to two months, they are adults, and I think that's when they have their... well, up to six months they're still considered young, and then they start to age. But I have not seen a respective slide with, you know, the dendritic anatomy for these animals. So I should be better prepared. I'll find one for my next talk.

Paul    01:12:50    Yeah. Please prepare yourself better. You know, so this is ridiculous. Yeah.  

Yiota    01:12:54    Yes.  

Paul    01:12:55    But are human dendrites special?

Yiota    01:12:58    Huh, excellent question. Well, some people think so. I mean, we even thought so three years ago, when we published this paper with Matthew Larkum in Science, where we discovered that there was a new type of dendritic spike in human dendrites that had not previously been seen in rodents, and which allowed these dendrites to solve complex mathematical problems like the exclusive OR. But then, a year later, the same kind of dendritic spikes were found in rats. So at least that particular aspect was not what makes us human, not what differentiates human dendrites from the dendrites of rodents. It's a very interesting feature nevertheless, because it was the first time we discovered a mechanism that has a non-monotonic activation function. As we discussed previously, we use sigmoidal activation functions, which are monotonic, to model dendrites in abstract models and in detailed models as well.

Yiota    01:14:02    And this one was non-monotonic, so it goes up and then it falls, which is why it allows the dendrites to solve the exclusive OR problem, which requires a non-monotonic function. So it's interesting to know that these kinds of computations exist in dendrites as well, even in rodents. We had not considered that before. But whether there is something else that is unique, I mean, we know there are differences, right? There are differences in size, there are differences in compartmentalization: human neurons are much more compartmentalized than mouse neurons.
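
A quick sketch of why a non-monotonic, dCaAP-like activation lets a single unit solve XOR, the point made above (a toy added here, with an arbitrary Gaussian "bump" standing in for the measured activation): with equal weights the summed drive is 0, 1, or 2, and a function that responds maximally to intermediate drive, and less to stronger drive, separates the XOR classes, something no monotonic function of that sum can do.

```python
# XOR with one unit and a non-monotonic ("bump") activation on the summed drive.
import numpy as np

def bump(u, center=1.0, width=0.5):
    """Non-monotonic activation: rises toward an optimal drive, then falls."""
    return np.exp(-((u - center) / width) ** 2)

w = np.array([1.0, 1.0])
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    drive = float(w @ np.array(x, dtype=float))
    y = bump(drive)
    print(f"x={x}  drive={drive:.0f}  output={y:.2f}  class={int(y > 0.5)}")
```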

Paul    01:14:40    What do you mean, compartmentalized? Like smaller branches? Like more...

Yiota    01:14:44    They are longer, first of all, they are bigger. And the compartments don't communicate with the cell body with the same strength as they do in the mouse neuron; it's more attenuated. So there is much more attenuation of the signal on its way to the cell body, which means the compartments are more independent, because they don't talk to each other as much as they do in the mouse, and they don't talk to the cell body as much. So you can think of them as having more parallel units, if we think of each part of the dendrite as an independent unit. Which could be one reason why, you know, humans can do more difficult cognitive tasks, if you compare the number of parallel units in the mouse and the human. That could be one line of reasoning, let's say.

Paul    01:15:30    Yeah, I guess we could tell ourselves a bunch of just-so stories about this too, right? I immediately thought, well, if it's more compartmentalized, with more attenuation, that means you can have subtler differences and finer scales and thus higher capacity. But I'm just telling myself a story there, perhaps.

Yiota    01:15:46    Well, I mean, we know that if you have more compartmentalization, and every compartment is non-linear, then you do have a higher capacity. We've known that from mathematical models. So it's not just a story; it's a convincing story, let's say. Whether that is the reason we can solve more difficult problems, though, that's, you know, a hypothesis.

Paul    01:16:10    Yeah.  

Yiota    01:16:11    Because you have many such neurons and circuits in the mouse brain, and maybe that is sufficient to solve the same type of problems that you would solve with fewer neurons in the human brain. So we cannot tell that this is the differentiating factor. If it's just the number of compartments, then it's just a scaled-up version. And is that the reason why we are better? Is it a matter of scaling up?
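
In the spirit of the capacity point above (a toy counting exercise added here, not the published capacity analysis), the sketch below takes 4 input lines and 2 dendritic branches and counts how many distinct Boolean input-output functions the neuron can realize just by regrouping which inputs land on which branch. For a point neuron with a single linear sum and threshold, the grouping is irrelevant, so only one function is realizable; with a nonlinearity per branch, the same resources realize several. Thresholds are arbitrary.

```python
# Counting distinct functions realizable by regrouping synapses onto branches.
from itertools import product

patterns = list(product([0, 1], repeat=4))        # all 16 binary input patterns

def point_neuron(x, grouping):
    return int(sum(x) >= 2)                       # linear sum; grouping has no effect

def branch_neuron(x, grouping):
    sums = [0, 0]
    for xi, branch in zip(x, grouping):
        sums[branch] += xi
    active = [int(s >= 1) for s in sums]          # local branch nonlinearity (OR-like)
    return int(sum(active) >= 2)                  # soma needs both branches active

for name, model in [("point neuron", point_neuron), ("nonlinear branches", branch_neuron)]:
    realized = set()
    for grouping in product([0, 1], repeat=4):    # which branch each input targets
        realized.add(tuple(model(x, grouping) for x in patterns))
    print(f"{name}: {len(realized)} distinct input-output functions")
```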

Paul    01:16:40    I mean, we also have a larger brain-to-body ratio, and we also have opposable thumbs, which helps.

Yiota    01:16:47    <laugh>,  

Paul    01:16:47    Right? Yeah. Um, so, how has your career gone? Let's see: have you made the right choices? Did you get into the right field? Are you doing the right thing? Are you happy with the trajectory and the journey?

Yiota    01:16:59    Yeah, I think I've made a lot of wrong choices that ended up putting me in the right spot.

Paul    01:17:05    Oh, that’s a nice way to put it.  

Yiota    01:17:06    Yeah. I mean, I'm very happy with where I am right now, and with my life and career right now. But, you know, the path was not easy. As I said, I couldn't study neuroscience early on. Then I went back somehow, then I returned to Greece and there was no computational neuroscience here. I did bioinformatics for 10 years, so I had to change my research direction and then wait for an opportunity to come back to neuroscience. It was complicated, yeah.

Paul    01:17:34    Luckily your brain is very plastic with all those  

Yiota    01:17:37    Dendrites, exactly. I feel fortunate that I was able to, let's say, take advantage of all the misfortunes, if you want to call them that, or opportunities to deal with different things. So I ended up being what I wanted, even though the road was not the easiest one. The path was not the easiest one.

Paul    01:17:57    Okay, Yiota, I have one more question for you, and I don't know if you can answer this. If you zoom way out and just think about how you have viewed dendrites with respect to their importance and role in computation and life and intelligence, how have your views changed over time? How have they evolved?

Yiota    01:18:19    Well, in fact they have, because initially I was thinking that dendrites are really, really important because they allow the brain to do more advanced computations than we could do with point-neuron systems, which is true, right? A point neuron cannot solve a nonlinear problem, whereas a neuron with dendrites can, and we've shown that. But I now kind of think that their more important role is not necessarily the ability to solve more complex computational problems; rather, they do this as a means of increasing the efficiency, the power savings, and the resource savings of the brain. You could do the same stuff with point neurons, but it would be so much more expensive. So now I look at them from a slightly different perspective, trying to figure out the advantages offered by dendrites in circuits, in large neural network systems, not so much focused on the improved computational capabilities, because we know they are there, we've shown them, but with respect to efficiency, speed, energy, you know, making a system more sustainable, and how we can take this knowledge and use it in other fields, like machine learning and AI, for example.

Paul    01:19:51    I'm still... so what I want to happen is, I want to be more excited about efficiency in its own right. I know it's important, but just from an intellectual curiosity vantage point, I want to think, oh yeah, efficiency is really cool.

Yiota    01:20:06    <laugh>, okay, let me give you some examples. Okay. Edge devices, right? Okay. If you want edge devices, you know, devices that  

Paul    01:20:13    Edge computing, yeah. So  

Yiota    01:20:14    Yes, edge computing. So if you want ChatGPT, or the next version of it, to run on your smartphone, or even something smaller, you have to fit it in, right?

Paul    01:20:25    Yeah. I know that's important.

Yiota    01:20:28    But if that doesn't do it for you, what about, let's say, the sustainability of our planet? We cannot keep burning

Paul    01:20:35    The energy, I know. I know it's important. I know our planet is important. It's just, the wonder and awe of our intelligence is not contained there for me. I know efficiency is part of the story, but I'm thinking more of the capacity and the plasticity, and all of that stuff is cool to me, right? And efficiency is... necessary.

Yiota    01:20:58    Yeah, I see what you mean. But all of these properties, why are they cooler than solving all the kinds of problems that you're interested in, and doing it in an efficient way? Because you need to have both, right? Efficiency on its own is not enough. You also need to have very powerful computing systems.

Paul    01:21:24    Mm-hmm. <affirmative>, right?  

Yiota    01:21:26    Right.

Paul    01:21:27    I mean, well, not if you're super, super efficient, right? Then you can reduce the computational power, if you can just run a lot faster and with less energy. It'll take longer to compute, but it'll be compressed into a lower-power, more efficient system. I don't know, I'm not a computing expert.

Yiota    01:21:45    No, no, no. What I'm saying is that we want to have smarter systems, right? We want ChatGPT to be better. We want AI to reach general intelligence, well, I'm not sure we actually want that, right? So this is, yeah, a controversial issue, let's say. But say we wanted to have systems that learn like the human brain, that don't forget, that don't mess things up, that are great, and you run out of technologies to generate them because they are not efficient: you will never get there. So you need to have both. You need to have a way to build intelligent machines, and you need a way for these intelligent machines, you know, to run on a laptop.

Paul    01:22:29    Here's why it's not exciting to me: what is exciting to me is the explanatory part, the understanding.

Yiota    01:22:36    You're right, I should have mentioned that. That's a great point. So the other reason we are interested in understanding how dendrites achieve efficient computations is because we want to understand how they do it, right? In a computational model where you explicitly simulate dendrites, you can look at the readout impact at the circuit level, for example, and measure the differences with and without dendrites, and try to see what it is that dendrites encode if you have them, and what you miss out on if you don't have them. You increase the explainability of these systems massively, so you no longer have a black box. That's another reason why we care a lot about understanding how dendrites contribute to, you know, advanced computations.

Paul    01:23:25    Okay. You've taken me one step closer <laugh> to caring about...

Yiota    01:23:29    To caring. Otherwise you wouldn't care about dendrites? That's terrible. That's really sad. You have to

Paul    01:23:34    Say that <laugh>. Yeah, no, I think it's exciting. I think it's really cool. As I said at the start of this episode, it's just daunting, you know? It's like one step further removed from being able to grasp what's going on, right? Because there are so many more dendrites, and so many different types of plasticity within the dendrites, and configurations, as you've demonstrated.

Yiota    01:23:59    Daunting, yeah. I mean, biology is complicated, right? But we've come a long way, and now we know so much more than we knew 20 years ago. And the technology has advanced so much that we can manipulate dendrites in vivo in the behaving animal. There are like five or six labs that can do it, but they can do it now, right? And we've found really exciting things experimentally: that dendrites underlie perception, that they may be responsible for the effects of anesthesia, these studies are from Matthew's lab, and that they may be implementing predictive coding across different regions. So we've learned a lot about why dendrites are important in the biological brain. If we can take some of this understanding and transfer it to artificial systems, that's already a major win, I think.

Paul    01:24:50    Yeah. But the, but the AI community doesn’t care.  

Yiota    01:24:54    Well, actually, you are wrong on that. There are a lot of papers coming out right now talking about the need to come up with more sustainable technologies, and in that search for more sustainable technologies, I bet they will care. They already care about bio-inspired properties that they want to add to their systems in the hope of making them more efficient. Dendrites are unfortunately not on the map for them, and I think they're not on the map yet because they are thinking of them in terms of the computing capabilities they offer to the systems as additional units, and not for their efficiency properties. That's why I think it is important to understand them and characterize them, because they will become a key player in the years to come.

Paul    01:25:38    What are the things that they have their eye on in bio-inspired approaches? Like neuromodulation, modularity...

Yiota    01:25:44    Neuromodulation, different plasticity rules, biological plasticity rules, you know, restricted connectivity, sparsity, things like that. Some people are starting to look at dendrites, right? Some people have looked at the dendritic action potentials that we published, so the human-like features, let's say, but again in the hope of providing more advanced computational power to their systems, which I really think should not be the main focus of the bio-inspiration.

Paul    01:26:12    All right, Yiota, I think I've taken you far enough. I appreciate all your work and bringing dendrites to the fore as well, and I hope the field continues to grow, although you'll continue to stand out, especially now that you're gonna be doing experiments to back up your modeling. So thanks for joining me here. I really appreciate it.

Yiota    01:26:29    Thank you so much, Paul. I had a lot of fun. I hope people enjoy this.

Paul    01:26:49    I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you for your support. See you next time.