Brain Inspired
BI 118 Johannes Jäger: Beyond Networks

Support the show to get full episodes and join the Discord community.

Johannes (Yogi) is a freelance philosopher, researcher & educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating between genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.

Transcript

Yogi    00:00:03    They’re presented as an explanation of what’s going on, while they don’t really explain anything, right? That was one of my problems. They just show that the system is complicated, basically. I wouldn’t even call it complex. And so I became frustrated with this… You’re really a process thinker, and I think that’s really important here. You need to let go of those fixed structures. I mean, we can only study small aspects of development and evolution using dynamical systems theory, but we cannot capture the agency of the organism… so successful that we’ve just completely forgotten all the other stuff that we’ve thrown out to make it work in the first place. And it’s time to get back to that, because a lot of the problems we have right now are in understanding our situation in the world and in understanding truly complex systems that have agents in them. And of course the neurosciences are completely included in that.

Speaker 0    00:01:09    This is Brain Inspired.

Paul    00:01:22    Hello, it’s Paul. On the episode today, I have a chat with Johannes Jäger, who also goes by Yogi, which is what I call him during the episode. On his website, Yogi bills himself as a freelance philosopher, researcher, and educator, and he’s actually done a lot of empirical research in systems science and evolutionary biology and a range of interdisciplinary topics as well. The reason he’s on the podcast is because I recently took his online YouTube course called Beyond Networks: The Evolution of Living Systems. The course covers a lot of ground, but it’s roughly about how, because of the complexity of us as biological organisms functioning in a highly interactive and complex environment, we need to rethink evolutionary theory. And Yogi makes an argument that we need to add a new perspective to evolutionary theory that accommodates a role for the agency of biological organisms.

Paul    00:02:24    And the course has the title Beyond Networks because, within this agential perspective, we need to somehow move beyond the dynamical and mechanistic explanations that we currently use to study things like gene regulatory systems, which are traditionally thought of as networks of interacting genes and the products of those genes and so on. So I wanted to have him on, first of all, because I really enjoyed his course, as you’ll hear, but also because his argument applies equally well to explaining brains, which are in the same complexity realm as organisms, obviously. And given that on this podcast we often talk about using networks like deep learning networks to explain intelligence, I think that Yogi’s is an important message to consider. So I highly recommend the course. Note the term “highly recommend,” because, fair warning: if you do watch the videos, your reading list will exponentially increase with all the books and papers that he quote-unquote highly recommends throughout.

Paul    00:03:25    We also have a guest question from Kevin Mitchell today, who was on the podcast recently, on episode 111. I link to Yogi’s blog and website and to the course that we discuss in the show notes at braininspired.co/podcast/118. If you find the podcast valuable, consider supporting it on Patreon. We just had our first Zoom presentation and discussion group through the Discord server that I run for Patreon supporters. This one was about the landscape of cognitive science, and it was a lot of fun, so I look forward to having more of those in the future. To support the show, just click the Patreon button at braininspired.co. All right, it was a pleasure having Yogi on, and I hope that you enjoy the discussion as much as I did. … So I came onto you—and in fact, we’re going to talk about your online course on YouTube called Beyond Networks: The Evolution of Living Systems.

Paul    00:04:20    And I’d like to say I came onto you through academic means, but I think YouTube figured out that I was looking for biological autonomy topics, because I had read Alvaro Moreno and Matteo Mossio’s book on biological autonomy. And I was either searching for it, or YouTube knew that I wanted to search for it, and that’s how I came across your course, which, I just want to say, I really love. I’ll probably recommend it in the introduction, but I just want to reiterate that I recommend all my listeners check this course out. But before we talk about that, I would love for you—so that I don’t botch it—to introduce yourself and talk a little bit about your background and the empirical research that you’ve done, and then how you’ve sort of transitioned, in your trajectory, to your current thinking.

Yogi    00:05:13    Well, thank you very much. First of all, it’s really nice to hear that the lectures are sort of reaching beyond their initially intended target audience of evolutionary biologists and systems biologists, which was sort of accidental. That’s really nice to hear. I had for years been a researcher in the lab myself, and then was the head of an empirical lab at the Centre for Genomic Regulation in Spain. And I was looking at the evolution of gene regulatory networks that are involved in the early development of, especially, flies. But the aim was to sort of learn general principles of network evolution, and I was using dynamical systems theory for my work. And I guess I’ve always been a bit of a philosopher, so I was reading philosophy as a high school student; I was interested in the philosophy of science while I was a student.

Yogi    00:06:15    And I read beyond the classes that I took about the philosophy of science. But it was at that time, when I was still sort of a PhD student really, that I noticed that we had a really hard time publishing our work. At the time, the field was very hostile to these sorts of modeling studies. And I also realized that reviewers sort of criticized the methods that we were using, but they didn’t get the questions that we were asking. And so I took a step back, and I was wondering about what kinds of questions scientists ask. This set me on a trajectory that got me into becoming the director of a small institute for the philosophy of science just outside Vienna a few years back. I didn’t stay there for very long, for various reasons, but since then I’ve continued on this philosophical trajectory.

Yogi    00:07:10    And during my time at that institute, I could make some really fantastic connections. Scientists don’t usually get in touch with philosophers of science, but I had all these people that I was working with, and there are some really good people out there that know a lot about not just the science that we’re doing, but also how we do it. And it’s a pleasure to be working with several of them now in collaboration. So my work has taken that philosophical turn, but I’m still doing biology. I would call it philosophical biology. It’s a type of theoretical biology that I would put famous people like Conrad Hal Waddington into, one that has been on the back burner for the last 50 years or so. And I think it’s high time to revive it.

Paul    00:07:53    Can I ask—so you said that you were studying or reading philosophy in high school and were interested in it. And I did too, but looking back, I really didn’t understand it. I didn’t have the same grasp on it that I believe I do now. Of course, that’s probably not true either. But do you feel that same way, or did you get it back then?

Yogi    00:08:12    No, absolutely not. You also reread, you know… of course I still like that book, but as you said, the context matters a lot, and I am definitely reading very different things right now. This was not a sort of planned trajectory. I meandered a lot. To explore—this is something I use in my work on academia as a system as well—we have to have space to explore. And a lot of it is serendipity, when you give yourself the space and the time to explore, which was very important in my own trajectory.

Paul    00:08:55    You say right now is the balance between your philosophical work output? Let’s say and empirical, because I know you’re working on multiple philosophy, uh, manuscripts,  

Yogi    00:09:08    I’ve just completely left empirical science. My lab shut down in 2015. I’m still carrying on some of the specialized work in evolutionary development and evolutionary systems biology, through our work on concepts of process homology, modularity, dynamical modularity, and so on and so forth. But I would say I’ve moved on, especially in my scientific work, to what I call philosophical biology, and I’m interested in the concept of agency and its role in evolution, which is probably something we’ll talk about today. So I’ve taken a turn, an irreversible turn, away from empirical science, I would say.

Paul    00:09:48    Okay. So let’s talk about—yeah, you mentioned agency and its role in evolution, and that’s kind of the focal point and end point of your course, Beyond Networks. One of the things that I found interesting, that sort of lit me up, was that I just see this parallel between what you’re talking about, in tying together genotypes and phenotypes, and how to understand evolution in the complex systems that we have, and how development plays a role in that process, and using dynamical systems to model that. I just see this massive parallel with what’s going on in modern neuroscience as well. So that’s why I thought immediately, oh, I’ve got to have you on, because I wanted to explore this, and I haven’t thought deeply about making, you know, super close ties and exploring what it means for neuroscience, but this is something that I want to pursue further. So I don’t know how familiar you are with the modern landscape of neuroscience, or at least, you know, one facet of it that we talk a lot about on the podcast, but maybe what you could do is just give a really broad overview of the course, and then we can go from there.

Yogi    00:11:08    In two or three sentences?

Paul    00:11:11    Two or three sentences, of course. By the way, I should say this thing is, like, 48, you know, 30-to-40-minute videos. And it’s super rich with historical perspectives, quotes, philosophical perspectives, and the modern science of evolution and genetics. So yeah, take it away.

Yogi    00:11:32    The executive summary of this… So, I am superficially aware of what’s going on in neuroscience, through colleagues that are engaged in work there. And I am of course aware that a lot of the arguments I’m making in my lectures apply there as well; I’m just not published in that field. But the central point, I guess, is that I was interested in the limits, the limitations, of dynamical systems modeling, because I was always claiming that I am a process thinker, that, you know, explanations have to be more focused on processes in biology. And this is really important in the field of genetics, genomics, but also in neuroscience, because of this increasing pervasiveness of networks that you see everywhere. They pop up everywhere, and often they’re just, you know, sort of hairball graphs in systems biology, with lots of nodes and connections.

Yogi    00:12:30    And they’re put up on a slide and, you know, they’re presented as an explanation of what’s going on, while they don’t really explain anything, right? That was one of my problems. They just show that the system is complicated. I wouldn’t even call it complex. And so I became frustrated with this, and that brought me into contact with people very early on, through my master’s supervisor, Brian Goodwin, and then my PhD supervisor, John Reinitz, who was pioneering this approach of using dynamical systems models with data to describe the actual dynamics, function, and evolution of gene regulatory networks. And this combination of empirical and theoretical work really appealed to me. It was really new. This was before systems biology was called systems biology; we called what we were doing at the time functional genomics. And I claimed, I went around and said, we’re really looking at this in terms of process, but I soon became aware that the methods we were using are also still very much rooted in this network view of living organisms.

Yogi    00:13:41    And if you look at living organisms, what they do is they change their structure constantly. So we capture a specific structure in a dynamical system model, and from that point of view, it’s still static, right? You have the equations that describe the interactions, and those interactions are fixed. So I became interested in—I went back to work I did during my master’s thesis, where I got in touch with the work by Maturana and Varela. And I read a lot of Varela. I did a master’s in holistic science in the southwest of England, at a little hippie college called Schumacher College, read a lot of Varela, and got in touch with embodied cognition and enactivism. And so it’s a funny twist: I basically took those ideas that came from neuroscience, cognitive neuroscience, into the field of genetics.

Yogi    00:14:38    So it’s come full circle. And through that, and sort of through my employment at the philosophy of science institute, I got to know Alvaro Moreno and Matteo Mossio, who are doing work on agency, organizational accounts of organisms, and their theory is very much rooted in Varela’s work. And it shows that the essence of biological organization is in the constantly changing structure of the organism. It’s the self-making, autopoietic aspect: it never stays the same. It’s like the Red Queen in Alice, who has to run to stay in the same place. It changes all the time. And so there is this old argument, going back to the theoretical biologist Robert Rosen, that you cannot actually model this sort of organization, and it’s very controversial. And I became interested in these sorts of questions because, if you’re really a process thinker—and I think that’s really important here—you need to let go of those fixed structures. I mean, we can only study small aspects of development and evolution using dynamical systems theory, but we cannot capture the agency of the organism. And therefore, I think that’s crucial for neurobiology too, if we come back to that: you need a dynamic approach, basically, but a dynamic approach that is radically dynamic, not like dynamical systems, right?
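
[Editor’s note: for readers unfamiliar with the kind of model being described here, a minimal, purely illustrative sketch of a fixed-structure gene-circuit ODE of the sort used in this line of work (the notation is generic, not taken from the conversation): the concentration \(x_a\) of gene product \(a\) might change as

\[ \frac{dx_a}{dt} = R_a\,\sigma\!\Big(\sum_b W_{ab}\,x_b + h_a\Big) - \lambda_a x_a, \]

where \(W_{ab}\) are regulatory interaction strengths, \(R_a\) a maximal synthesis rate, \(\lambda_a\) a decay rate, and \(\sigma\) a sigmoid. The point being made is that the interaction matrix \(W\) is fixed once and for all in such a model, whereas the organism keeps rewiring the very structure those equations would have to describe.]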

Paul    00:16:14    Well, I mean, you know, as I was saying, I see the parallels between how you’ve used dynamical systems theory within this limited approach to, you know, model the developmental process, et cetera. And in neuroscience, dynamical systems theory is all the rage right now: to take a whole population of neurons and figure out what they are doing through, like, a trajectory through these lower-dimensional spaces, and map that eventually onto behavior. So that was kind of the parallel that I saw. You know what I’m going to do—I’m going to interrupt our conversation, because since we’ve talked about dynamical systems theory, I’m going to play you a question from a guest before we move on. So I had Kevin Mitchell on; we didn’t talk too much about his book Innate, but the book Innate is all about how development has, you know, gotten short shrift in the story of how our genotypes lead to our phenotypes and our behaviors. Anyway, I thought he would be a good person to ask to come on and ask you a question. So I’m going to play this question for you, and then we’ll continue our conversation.

Speaker 3    00:17:23    Hi, Yogi, Kevin Mitchell here. I’m a big fan of the holistic, non-reductive approach that you and your colleagues bring to biological questions, which feels very rooted in principles of process philosophy and systems thinking that were popular for a while at various stages in the 20th century, but which were then replaced with a very mechanistic and reductionistic outlook. It feels to me like holistic, dynamic approaches are gaining traction again, probably because we now have experimental and computational tools to generate and deal with dynamical datasets. And I wonder if you feel the same way in the reception your own work is getting.

Yogi    00:18:00    Yes. Thank you, Kevin. Yes, I do feel that way. And as I just said before, being dynamic, even going beyond dynamical systems, beyond fixed structures—this is extremely important, and I think there is a big revolution coming at some point. It’s a little frustrating to see how slowly it’s catching on. A lot of the empiricists have problems seeing the practical use of these things, because, you know, these ideas are still very theoretical, and a big challenge is to bring them to the bench, basically. And, you know, even though I now work exclusively theoretically, one of my big aims is to work towards getting those theories within the range of empirical tractability. I think that’s extremely crucial. I think Kevin is a very optimistic person, and I like that. And also, it’s nice to see that this work is seen among those people that really matter, but I wouldn’t say it’s gotten a lot of traction in the mainstream of genetics or developmental genetics.

Yogi    00:19:14    And it’s a bit sad to see how theoretical work is massively undervalued in those fields. I think one of the reasons is that the technological progress has been so fast, and the temptation to just produce datasets and resources has been overwhelming to a lot of people, so that we’ve forgotten a little bit about what was earlier called philosophical biology. And I think it’s very important to get back to those conceptual questions now, and I’ve been trying to get people to notice and get interested in it, but it’s really hard. I think it also has a social dimension: everybody’s under a lot of pressure to just produce stuff, and these sorts of questions aren’t very conducive to, you know, a career, basically, in today’s academic system, I have to say.

Paul    00:20:02    Well, thanks, Kevin, for the question. So, sorry to interrupt us, because I wanted you to continue talking about the big ideas from the course. Before you do that, though, I am going to interrupt us again, because a big thing that you talk about is process-based philosophy, a process metaphysics, a process approach. And I love process metaphysics. I still find that substance metaphysics, the idea of things, is so ingrained and so trained into me that I still have a whole lot of trouble thinking of things as processes. And I’m wondering if that gets easier, if you think of everything in terms of processes, or if you still struggle and think of processes as things.

Yogi    00:20:47    Right. So, yes, that’s a very good question. I mean, you don’t, right? I mean, so basically there’s a beautiful work by Lakoff and Johnson, for example, that looks at the metaphors we live by—the book Metaphors We Live By—where they describe something that they call the containment doctrine. They can show that very early in your childhood, you form this vision of the world as, you know—basically, I call it the Tupperware model of reality. It’s basically boxes within boxes, containers within containers. So that’s very, very deeply ingrained. And what is important here is to say that for a lot of questions and topics, you do need process-based explanations, a process-based approach, because it is very hard—I think Quine was the first philosopher who brought up the absurdity of it all. He said, you know, you’d have to sort of change language, for example, right?

Yogi    00:21:44    Subject, object, sort of… he used the sentence “the white cat is bristling towards the dog”; it becomes “it’s catting whitely, bristling towards the dog.” And so that’s taking it ad absurdum. Also, Borges has a beautiful argument about this in his short story—“Tlön, Uqbar, Orbis Tertius,” it’s called—a beautiful story. And so you don’t have to get rid of your language, your thinking, but you have to realize that sometimes this very deeply ingrained pattern of thinking is hiding aspects of phenomena, of questions. It’s preventing you from asking questions that just don’t occur to you if you think like this. For me, in genetics, it’s very strong, because you have this idea that you can explain processes—developmental processes, behavior—in terms of genes, which are things. Genes are… they’re like pearls on a string, right?

Yogi    00:22:51    And so you have a huge gap: how is that gene causing any sort of behavior or phenotype? And that’s been neglected for a long time, because we are happy with saying, you know, this gene does that. But what does that even mean, right? And these are quite obvious questions, which are also beautifully treated in Kevin’s work, I have to say, that have often been so obviously in our face that you don’t see them anymore. And there’s a beautiful quote from the philosopher Whitehead, where he says that it’s exactly those things that are so obvious that we don’t even see them anymore. And these are the things that you have to—actually, I use the word “thing” all the time, it’s terrible—so yeah, it’s exactly those aspects of reality where, if you question one of those and really find something, that’s where really deep insights occur, and of course changes in how we perceive the world. And I think that’s very true. So this is one of the challenges here: to see where and when you use process-oriented thinking, process explanations, and when it’s okay—and it is okay, in many aspects, in many areas of life and science—to use substance-based explanations. That’s fine.

Paul    00:24:17    Yeah. In science, we’re very concerned with definitions, right? So what’s the definition of a gene? And so, on the podcast, we talk a lot about intelligence, natural and artificial intelligence. And I feel like when you name something like intelligence, it reifies it, and all of a sudden it seems like a thing. And, you know, we’ll get into this later, about what kind of thinking—process thinking or otherwise—to apply to these sorts of things. But I feel like the entire world of intelligence, natural, artificial, whatever those words—you know, whatever intelligence means—would really benefit from a process-based approach.

Yogi    00:24:58    Absolutely. I mean, the one thing that got reified and has a really negative impact is information. I mean, these sorts of absurd claims that information is just as fundamental as substance, or whatever. You know, it’s a way of looking at the problem, right? If you use an information-based approach, you have a certain way of looking at a problem, but it’s not like—some people say the universe is made of information. What does that mean? It’s just completely meaningless. And so I guess I agree with you: we have to think about what these terms mean and what kind of work they do. I mean, one of the really important practical aspects of doing philosophical work in the sciences is to check concepts, to examine what work they do. And a lot of the concepts we’re using don’t do any real work at the moment. So I think that’s one of the reasons we have to rethink them. That’s what I tried to do in the lecture: to question a lot of those concepts that we use every day. Actually, a lot of them are just metaphors, you know; they’re not defined beyond that—the genetic program and so on and so forth. These are metaphors that we use very carelessly, without realizing that they’re metaphors anymore.

Paul    00:26:13    All right, well, I really derailed us, but let’s get back to agency and the idea of, you know, closure. And, well, I’ll let you continue about how you think of agency and its role in evolution.

Yogi    00:26:27    So this is actually stuff that came out of making this lecture. The lecture itself has been a process. I’d been giving this lecture for years at the University of Vienna, to about 15 students, until I was actually forced to record it. And then I thought, why not put it up on YouTube? So that’s what I did, and now it’s viewed by hundreds of people, which is fantastic—all who have taken the time to do that. So intellectually, I sort of came to agency very late, right, just like in my career, and now I’m really interested in the role it plays in evolution. So here’s the big historical movement: from Darwin—Darwin’s theory of evolution was a theory of the struggle for survival of the individual, of the organism; an organism-based theory. It had very big difficulties, right? It had no mechanism for inheritance, and it had no mechanism for the production of phenotypes either.

Yogi    00:27:33    So this theory was then completely transformed in the twenties and thirties, through the Modern Synthesis and the rise of population genetics, which bracketed out the organism. So it looked below the organism, at the genes, and it looked above the organism, at the population level, and it completely forgot about the organism. The organism just became the sort of interface where the population level and the genetic level interact; it had no importance. And of course, even Darwin knew that the behavior of organisms—the actions, the choices that organisms make—has really important consequences for evolution, but it’s become a really big taboo topic. Not least because, you know, the term mechanism, mechanistic explanation, is often mistaken nowadays as meaning you have to explain something at the level of molecules, the molecular gene. And I think that’s just crazy. We’re dealing, in the life sciences and neurosciences, with hierarchical, multilevel systems that need to be explained at multiple levels.

Yogi    00:28:40    And there’s no scientific reason to focus on only one level; that’s just a historical thing that happened, right? So I’m interested in these higher levels, and I was thinking: what role does organization play in evolution, this sort of weird biological organization? So let me—I probably have to talk a little bit about this. This notion of closure, organizational closure, is a very old idea; it goes back to Kant, and it was explicitly formulated by a developmental psychologist. And the idea is that you basically account for all the causally important factors from within the system: everything that you need to continue has to be produced from within the system. Now, it’s very important to make a distinction between that sort of closure and a thermodynamically closed system, right?

Yogi    00:29:46    So systems with organizational closure have to be thermodynamically open. They have to have a constant flow of matter and energy through the system; otherwise they cannot achieve this organizational closure. And what it ultimately amounts to is that you have a life cycle, right? At the end of the life cycle, you have produced something that looks a little bit like you. And that is one of the fundamental principles of evolution: you need not just variation, but the right kind of variation, and you have to pay a lot of attention to that if you want to be truly evolvable. A lot of people have pointed out problems with very simplistic concepts of evolution like replicators, you know, reductionist accounts that were pioneered by really smart people like John Maynard Smith, and others like Richard Dawkins—the idea that you can have a naked replicator that just makes copies of itself.

Yogi    00:30:41    It was pointed out really early on, in the seventies actually, by Eigen, that you get something that is called an error catastrophe. So if you have a molecule that replicates itself, you get errors, and the errors are just linearly accumulating. And at some point, if you have an exponential copying mechanism, you just get errors. I mean, there’s no way to maintain a species that could be selected for, like that. At the other end, you have self-organization work that is about autocatalytic systems, for example—self-organizing systems that maintain themselves in a certain state. But they can’t vary either, right? Because as soon as they vary, they’re no longer autocatalytic, so they no longer reproduce themselves, by definition. And so you have no variation there either. And so this is a really tricky problem.
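
[Editor’s note: a minimal sketch of the error-threshold result alluded to here, from Eigen’s quasispecies theory (added for context, not part of the conversation): for a replicator of sequence length \(L\) copied with per-site fidelity \(q\), the fittest “master” sequence can only be maintained against mutation if its selective superiority \(\sigma\) satisfies

\[ q^{L} > \frac{1}{\sigma}, \qquad \text{i.e.} \qquad L \lesssim \frac{\ln \sigma}{1-q}, \]

so a naked replicator without error correction cannot maintain a genome longer than this threshold; beyond it, the population melts into random sequences—the “error catastrophe.”]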

Yogi    00:31:36    And so I realized, thinking about this, that you really need the sort of life cycle that organisms have, and that life cycle depends on organizational closure, this specific sort of snake-that-bites-its-tail organization of living beings. This is a very superficial description of a very complicated theory here. Now, why do you also get agency with closure? Okay, so what you have to imagine is that, if you look at it from a dynamical, process-oriented point of view, your current state as an organism depends on past states of the environment—you react to your environment—but also on your own past states and those of your ancestors. So, partially at least, your current state can only be accounted for through previous states that you have internally. So a lot of it depends on causes that come from within, and that’s exactly what it means to act, to have agency, right?

Yogi    00:32:33    So a true concept of agency—and I’m not talking about people who want to explain it away as nothing but information processing; it is not like that. It is something where, you know, you have agency when some of your actions are caused from within your own system. And that can only happen if you have organizational closure. So if you have that closure, you have a certain basic agency. I’m not talking about making decisions, conscious decisions, at all. A bacterium has this type of basic agency, so that the bacterium can, in a way, decide to swim up a gradient by triggering certain behaviors in a very stochastic way. And it has a very limited repertoire of actions, but still, it makes—we don’t even have the language to talk about this; it’s very dangerous to say it makes decisions.

Yogi    00:33:25    It’s not thinking about it, but it’s internally generated, and it has a selection of different behaviors—it could do otherwise. And it can only do that because it has closure. So basically, what we’ve connected here is the principle that systems that are evolvable have to have this closed life cycle, but these systems have to have this particular type of organization, which automatically comes with agency, a certain kind of autonomy from the environment. And so basically, agency and evolvability go hand in hand. So the claim in my most recent work is that you cannot even get evolvable systems that are not natural agents. And so then it becomes really important to ask: what does that degree of freedom entail for evolutionary theory? I mean, it has really big consequences. Think about niche construction: humans altering the environment instead of altering themselves. So we adapt through changing our environment, more or less completely, nowadays.

Yogi    00:34:34    This may not work out very well in the end, but all these sorts of additional dimensions are completely excluded if you look at just the genetic level. And that’s something that a lot of critics of reductionist evolutionary theory are saying, but here we have a specific perspective that I call an agential perspective. So it’s not just a process perspective; it’s a specific type of process perspective on evolution, which allows us to ask different questions, and to make these questions legitimate to ask, because there’s a big taboo about asking, you know, what the organism wants, what kind of role that plays in evolution. You’re immediately dismissed as peddling some sort of mystical teleology, but that’s not the point. It’s taking the phenomena seriously and trying to come up with a scientific explanation—but this explanation may not be mechanistic, for reasons that I explain in the lecture series, again, because these sorts of behaviors are not necessarily explainable in mechanistic terms. So it’s not unscientific—I repeat, it’s not mysterious or anything—but it’s not what would be considered a traditional mechanistic explanation, which is just taking the system from state A to state B through some causal chain. Because it has this sort of snake-bites-its-own-tail causal structure, which is very convoluted, it’s not obvious how to deal with it in mechanistic terms, basically.

Paul    00:36:10    But in your lecture series, you talk a lot about complex systems and how difficult they are to understand, and that’s why a multi-perspectival approach is beneficial. One of the things that you talk about is near-decomposability, from Herbert Simon, right? Where everything is connected and everything is affecting everything else, but some things are way more important and some things are less important. And I’m wondering, if you had to speculate about this agential perspective in evolution, how important will it end up being to the process of evolution—or, you know, among the multiple explanations of evolution, will the agential perspective be just a tiny thing, or will it become the main driver of explanations for evolution?

Yogi    00:37:02    I have no idea. The thing is, we haven’t asked; we haven’t looked, right? This is my problem. So here, don’t get me wrong: I’m not saying, look, agency is crucial for evolution. I’m just saying I can make an argument that systems that are evolvable have agency, that the two coincide. So if you have an agent, it’s evolvable; if it’s evolvable, it’s an agent. So it’s worth looking at that. These questions have not been asked, so we don’t know the answer. It may well turn out that it’s completely unimportant, but you can’t just say that—a lot of reductionist geneticists just say, “I know it’s not important. No, macroevolution is just an extrapolation of population-level frequency shifts.” And that’s just made up; there is no scientific argument behind that. That’s just a simple extrapolation, a linear extrapolation, and nobody has ever done any serious work on this. So that’s what I’m saying: it would be worth exploring. You know, in the end, near-decomposability is nicely illustrated by astrology, right? I mean, astrologers have a point when they say the planets do influence your relationship. Sure. But the people who you have the relationship with are probably much more important for your relationships.

Paul    00:38:26    What’s your sign, Yogi  

Yogi    00:38:29    Pisces.  

Yogi    00:38:35    Yeah. So, you know, everything is connected to everything, but a lot of these interactions are really not important for, you know, whatever question you’re asking. This is the basic principle of perspectival realism: you’re going at reality from a certain angle, and you have a certain question, and in that context, some perspectives are better than others. This is not relativism—like, you know, some postmodern thinkers say that all kinds of knowledge, all discourses, are the same. Perspectivism is not that. It says that for certain contexts, certain perspectives are better than others; they give you more sound or robust, trustworthy knowledge, but you cannot extrapolate that across all circumstances, ever. You know, and that’s basically my favorite definition of a complex system: a system that will show ever-new properties in new contexts, right?

Yogi    00:39:30    So you basically can never list all the possible properties of a complex system in advance. That’s an argument that I’m strongly making. And so you get true, radical emergence in evolution, true innovation. It’s fundamentally unpredictable, which I think is a great thing. I would really not like to live in a clockwork sort of Laplacian-demon universe where everything is determined. And so this is a beautiful view, a process-oriented, open-ended view of the world. That is, again, going back to Alfred North Whitehead and what he called a theory of organism, which was a metaphysical system that viewed the world, the universe, as more like an evolving process, an evolving organism, than a mechanism. I think there’s nothing dodgy or mysterious or unscientific about that. We can have a science that is compatible with that, and it would be much better to study evolution using that sort of science than the traditional mechanistic approach.

Paul    00:40:37    You argue also that this agential perspective is in line with this open-endedness, what you’re calling radical emergence, through the agent’s interactions with the environment—because it changes the environment, and that environment changes the agent. And there’s this interaction where the adjacent possible, that Stuart Kauffman preaches about, is this dynamic, ongoing, open-ended process.

Yogi    00:41:05    That’s right. Preaching—but he’s got a point, right? I mean, so yeah, I really like the argument that Bill Wimsatt is making. He has a book that’s called Re-Engineering Philosophy for Limited Beings, and in the first chapter he’s sort of taking on [inaudible]. And basically he says a lot of the theory of knowledge and science through the ages has been made for an unlimited demon who knows everything about the universe and its future and its past. It’s not a limited being that’s in the universe. And any limited intelligence needs a different kind of approach to epistemology, to the theory of knowledge. And he builds that perspective, or approach, out of this argument. And I think this is very powerful, and, a bit ironically, you end up with a much more realistic theory of knowledge than if you base it on these sorts of dreams of a final theory, as some people do.

Paul    00:42:09    Yeah. So there are so many different ways we can go here. I’m itching to bring up the AI work that you’re doing with Stuart. But I want to ask, because you talked about how a bacterium is an agent and how we are agents, but to be an agent you don’t need any consciousness, you don’t need any awareness: where do you see a role for brains and/or minds, if you do see a role, in this, you know, agential, open-ended evolution perspective?

Yogi    00:42:42    So I see the zone between agency—basic agency—and cognition, cognitive agency, as a sort of gradient. I think nervous systems evolved on this sort of foundation of basic agency. One reason for it is to enrich the repertoire of actions that you can select from. Of course, if you’re mobile and you have a nervous system, you’ll have a lot more choices than, say, a bacterium or a plant. And that created its own sort of dynamic of evolution, in that sense. So I think—and this is why it’s so important to build a vocabulary for agency without consciousness, because the problem is, if we muddle the two, then we get to panpsychism and stuff like that. Let’s just say that I’m not sympathetic.

Paul    00:43:40    Okay. Very good. Yep.

Yogi    00:43:43    That’s all I have to say about that. So I think that’s precisely an example of where we make a mistake: we’re used to thinking about phenomena that come out of agency in the context of consciousness, because that’s how we work, right? And so it’s really hard to abstract this away to organisms that don’t have consciousness or cognition. So there’s this view—I think it’s…

Paul    00:44:12    Bizarre. And this number, perhaps  

Yogi    00:44:15    I don’t think it’s, yeah, I think it’s, uh, if anything, it’s not helpful, you know, it’s, it’s maybe not helping because it muddles, uh, discussions that we have to have about, uh, consciousness about free will. These are really hard. I, I don’t know what consciousness is. I don’t have a particular opinion about  

Paul    00:44:32    That. It’s a thing.  

Yogi    00:44:33    Of course, of course, it’s a thing I would like to understand basic agency before we move on to consciousness. And, uh, there, I’m very careful, you know, just this a personal choice, because I think the questions that we ask about consciousness, not really well posts, and I think we are now at a sort of stage in history time in history where we can start to ask, well, post questions about agency, but we lack the vocabulary. All the vocabulary have about, you know, making decisions, selecting behaviors, all of that is based on what we do as conscious beings. Of course it muddles the waters. It makes it sound more, um, hokey pokey than it is. And often you confuse people because he seemed to be making the claim that, that everything is consciousness. Definitely don’t agree with.  

Paul    00:45:30    Let’s talk about AI then, because you and Stuart Kauffman are co-writing this manuscript where you sound the alarm about how artificial intelligence is going to take over the world and kill us all. Is that right? Do I have that right?

Yogi    00:45:43    More or less? So we’re discussing at the moment. And also I want to mention, uh, Andrea Rowley, an AI researcher from Italy, who’s involved in our conversations three-way conversation. And we we’d like to have this discussion in a publication. Of course. So here’s the thing about, uh, organizational closure, this weird organization, um, it gives you agency. So basically you could formulate it in different terms. You can say it allows you to want things. Uh, again, we don’t have the right word. So a bacterium does one things in a conscious way. Like we want things, but it still goes for the food. You know what I mean? So it’s, it’s sort of a truly goal-oriented behavior. And the argument is basically simply to say that you cannot make an algorithm or an AI just translates to the question about cost, function, choice to begin with, or even to, to choose or not, whether you want to optimize the cost function or not. So the argument is that in, in AI, first of all, all agents as an AI, uh, researcher would call them are simply algorithms, input, output processing procedures, and the argument of organization, the organizational account of organisms is that organisms are exactly not like that. They are more than that. So because they can cause actions from within the system, they have a degree of freedom. Um, somebody, I forgot the name of the office of the paper called freedom from immediacy. They don’t after organisms, don’t have to respond to the environment,  

Paul    00:47:27    Golden shad Lynn, I believe  

Yogi    00:47:29    That’s right. You’re right. So freedom from immediacy. You don’t have to, it’s basically an argument about, you know, you could have done otherwise. Some sort of, again, it’s not the argument about free will free will, is something more evolved? Consciousness took the bacteria have behaved in a different way. Basically the answer, if you believe in what I would call a strong concept of agencies, yes, there are, uh, there’s a freedom there, a degree of freedom. So I think that’s very important. Now, the argument that we’re trying to make is that this degree of freedom is not algorithmic because it’s not formalized a hundred percent. And there are several arguments that have been made in the past, uh, about this, uh, one of them by Robert Gross and the first one who is often misunderstood as saying that you cannot model an organism and it’s been shown that you can make a simulation of an organism that behaves like an organism.  

Yogi    00:48:28    You have to use a recursive sort of functional programming paradigms to do this complicated. You can’t just do traditional consistency theory, but you can do it. But there was argument wasn’t about that. He said, it’s not complete. So it’s an incompleteness argument analogous to go to the incompleteness theorem and mouth. Good. We’ll show him that number series and compete doesn’t mean you can’t use numbers here, right? It just means that it doesn’t capture all possible statements about numbers. And this is the same argument. So basically you can make a model of the organism, but the organism can always surprise you because it can have, it has this degree of freedom to act in a way that it’s never done before, because it’s action right now, it’s state right now. It depends on its entire history that we’ll be sharing history in the end. And so its behavior is fundamentally unpredictable unless you know, this entire history, which is impossible.  

Yogi    00:49:25    So that’s one argument. The other argument presents it’s complimentary to that. And he says, he cannot simply cannot predict all the possible functions of a, of a complex system. So he takes his screwdriver as an example, as, as he has been designed to tighten bolts, but you can also use it for all kinds of other purposes. You can pry a door open with it. You can pick your nose. I think it’s same as well, whatever the context of its use, it’s always different and ever evolving into the future. It’s never the same. You, this is a radically complex, depending on property function, the function of the screwdriver. And so as soon as a thing, an organism across this has a function, you, you cannot predict all the possible functions anymore. And so that brings you this, this sort of Jason possible view of radical radically emergence, but you simply cannot predict the specific context before it actually happens.  

Yogi    00:50:33    There’s another beautiful book that I recommend, uh, by, uh, process philosopher, Nicholas restaurant, which is called where he makes the same argument in the context of discoveries of sciences. As if you could predict the specific discovery in the future, then you would have already made it. So there’s a logical paradox there. So specific discoveries in the future or facts that are fundamentally unknowable, they just can’t know them because if you do, then you’ve already made it. And then this is not a future discovery anymore. So I really like this sort of use. So we’re tying together these, these arguments to say, okay, so we cannot have, um, this sort of open-endedness this, this sort of surprise element in the behavior and the evolution of organisms in an algorithmic system. And since all our AI agents are algorithmic, um, they cannot actually do that. So the argument is basically, maybe you can call it a stronger in a weak sense of agency.  

Yogi    00:51:35    The weak sense of agencies is simply algorithmic information, and you can make an argument that the, what makes biological agents true agents goes beyond that. And therefore you cannot have this sort of artificial general intelligence, thanks, Skynet that suddenly wants to exterminate humanity. It’s not going to do that. Why would it want to exterminate humanity and how do you program it? It will always be limited in some way, by the way, in which you set up the, the AI system in the first place while living systems are not, um, they are also constrained, but they can break through those constraints eventually, uh, through evolution while an AI simply can’t do that because it’s algorithmic,  

Paul    00:52:21    You create, um, like someone like can Stanley, um, and you know, lots of people working on evolvable systems.  

Yogi    00:52:30    So these systems are always evolvable in a limited way. I mean, this has also, I mean, the failure of artificial life, you know, that the sort of, why, why do these evolutionary simulations always get stuck? It’s my strong suspicion that it is because these agents, again, that’s a bit of a misnomer. I always scare quote, the term in this context are not true agents. And so I have any suspect, of course, I’m biased here, but it would be interesting to look at the role of agency in evolution, because I don’t think you can get this sort of open-ended evolution without agents. And this is what I talk about when I use the term evolvability here. It’s a very specific sense, a very strong sense. So true innovations that are not just farmers making the argument. That evolution is just sampling from a huge platonic space of ideal forms, which is bizarre. And I think Castleman’s argument is directly opposite. Uh, this sort of view, I like it much better because it’s process oriented and we don’t have this pre-existing spaces, possibilities where you cannot formulate it.  

Paul    00:53:39    Is there a room for good enough AGI or good enough AI that we would be satisfied, but I don’t really even know what the goal of AGI is. So if you asked different people and they have different answers, but, um, you know, w will we be satisfied that we’ve created something good enough using something like, you know, uh, reinforcement learning algorithms, right? Where you, you are, you know, you’re still externally giving the objective function, um, as it’s, as it’s, as the agents scare quotes motivation. Right. Um, you know, a lot of, you know, a lot of people are talking about reinforcement learning, being enough to get to AGI. Uh,  

Yogi    00:54:17    Yeah. So the question is, what do you want? I say, I, it’s not clear to me. I mean, you can, you know, what is it, Eliza pass the Turing test for a few minutes, lots of people. So that’s, I think it depends on how you set the bar. And also if you want, I mean, we’re happy with annotations. So AI can produce a lot of imitations of creativity and true life. And we’re very convinced by that because it’s very good at doing that. And so the question again is why would you be wanting to do this? It’s just, I think for me, this discussion is important because if you read around in rationalist circles and all that, a lot of stem, uh, there’s a beautiful book called depressive, um, which is listing all the different, uh, existential risks to humanity right now. And then always near the top of the list is this, um, generally AI sort of replacing us, what will kill us is Facebook, you know, watch the social dilemma, fret about Skynet  

Paul    00:55:21    Is  

Yogi    00:55:24    So that’s much more immediate and much more dangerous and Skynet, don’t worry about Skynet. That’s not going to happen anytime soon. You may actually have to evolve synthetic life to get that, you know, as sort of, uh, an AI implemented in a synthetic life form to get that, you know, but now we’re talking about science fiction.  

Paul    00:55:42    Well, well, right, but that’s, yeah, I was going to ask you about that. I mean, you know, so is life a necessary, um, precursor to intelligence or do we need to reconceptualize what we mean by the thing we call intelligence? Do we, I don’t even know what the hell we’re talking about, to be honest.  

Yogi    00:56:01    Yeah, it’s very, I mean, there are arguments back and forth. Whether you could implement the principles of organization for living organism are not necessarily dependent on the material substrate. Of course, the material substrate needs to have certain characteristics and the argument has to be made by Alvera Marino and the ranchettes among other people that you need, uh, an organic substrate to get that you can implement that mechanically. It’s just not feasible, but I, I wouldn’t, uh, venture too much out on that. Lynne, you know, maybe we develop some kind of, you know, gray goo technology soon that the can do it. I don’t know what we’re arguing, but it’s Stewart. And Andrea here is that, um, you need a completely different architecture of your AI. And at the moment, uh, with our current technologies, I would say the only thing that can do it because suddenly they have living cell, a living organism.  

Paul    00:57:04    What do you think? Um, if you had to, again, speculate, you know, how far into the future do we need to go until we can artificially create it with materials and new architectures?  

Yogi    00:57:15    So that depends on what you could call strong synthetic biology, right? I mean, at the moment what we’re doing is we’re doing replaying some electrical engineering equivalent in organisms. We try to predict what the circuits that we built into the organisms do, and then most of them were wrong. So the true aim of such a synthetic biology group, of course, to, uh, synthesize, um, a living organization from scratch because, you know, these sort of publicity stunts that vendor, for example, that saying, oh, you know, we we’ve synthesized the genome and then put it in a silent, at least they cheated in two ways. First of all, the genome was based on existing genes from a host organism. And of course the satellite transplanted into, was it living, uh, saying same thing again, I shouldn’t have been living organism. So, so that’s not creating life at all.  

Yogi    00:58:10    So it would be to, to synthetically produce a, uh, organization with closure that shows all kinds of signs of life and has agency and this strong sense that I was trying to convey, uh, in my lecture and my recent work. So, uh, I think that is something that current AI certainly cannot. I mean, if we talk about what we’re doing right now, this is impossible to do, but I, I don’t want to speculate whenever you say it’s impossible to do something, technologically someone comes along and does it. So, uh, whether we will have in the near future, such artificial organizations, life-like organizations, this is a very open person. That’s a nice challenge for engineers and biologists out there, and that’s a bit Frankensteining. It has to be regulated quite a bit in my opinion, and we’d have to proceed with extreme caution, also not to release this kind of stuff into the environment.  

Paul    00:59:07    Hmm. I know you’re not a neuroscientist or a, an AI practitioner, but a lot of the people I have on the podcast use deep learning networks, which is a large part of current AI to study what’s going on in brains. And, um, you know, make a story about how, uh, the, the network and its dynamics are similar to brain dynamics. Um, do you have thoughts on that kind of approach or do you see it as fundamentally limited in the same way?  

Yogi    00:59:35    So I see, I see it as fundamentally limited, and it’s a, it’s a huge step forward to just sort of looking for certain circuits in the brain and say, this circuit does this, and this does that this circuit doesn’t do anything. You can run different processes on the same circuit. So in that sense, if you’d step forward on the other hand, it’s, again, it’s just a very fixed, uh, you know, traditional sort of dynamical systems approach that would not give you true, uh, autonomy as cognitive processes, because it’s just algorithmic it’s limited in that way. It’ll, it’ll allow you to simulate probably a lot of the aspects of cognitive processes. So I have to say, uh, and this is very important that, um, a large part of an organism, a large part of our brain works in mechanistic way. So we can get a long way by studying these systems, dynamical systems approaches.  

Yogi    01:00:30    So I think we can definitely make pragmatic progress, even great progress, with these approaches. What they will not do, in the end, is give us a complete picture or a very deep understanding of how brains work, because they are intrinsically limited to mechanism, to algorithm. And for me, if we can show that even a simple bacterium has agency that is not algorithmic, then we don't have to discuss whether the brain is a Turing machine anymore. Some of the processes that run in the brain may well be like computation in Turing's sense, but the whole brain, the whole organism — why should it be captured by this limited technological metaphor? It makes no sense. The burden of proof lies with the people who claim it is, and they often say it's self-evident, but it's not. I don't think it is; I've never seen a convincing argument for it.

Paul    01:01:30    I mean, there are people like Mark Bickhard who kind of rail against computationalism, right — the computational approach to understanding. But you have to admit that it has been one hell of a successful perspective for advancing our understanding of at least certain aspects — those aspects you're talking about that you can understand in mechanistic terms.

Yogi    01:01:49    I have colleagues in evolutionary biology who go against molecular biology and the molecular-reductionist approach, saying, we're against all that — and that's absurd. It's a very successful science that has brought us a lot of really interesting and important insights. The trick is to recognize its limitations, just like with computationalism in cognitive science, right? It's a very useful approach, but like any perspective — if you're a perspectivist, you realize it's just one perspective you can take, useful in certain contexts. What's happening in both neuroscience and the life sciences is that these genetic paradigms and metaphors — the genetic program, and the program metaphors in neuroscience — have been taken massively out of context and used to explain away phenomena beyond their boundaries. So basically we've inverted the argument: if it doesn't fit the paradigm, then it's not real. You see this in the literature about agency all over the place — trying to explain agency away rather than explain it, because then we can save the mechanistic approach instead of saving the phenomena and taking them seriously. And I think that's just upside down. If you're a true perspectivist, you take the phenomena seriously and don't just dismiss them because they don't fit your preconceived paradigm of how science should be done.

Paul    01:03:21    So one of the things that Moreno and Mossio talk about in their book on biological autonomy — thinking about your perspective on the agent as an autonomous organization with organizational closure, closure of constraints — is that they also sketch out an argument, which they admit is incomplete, that our brains and minds have this same kind of organizational closure, autonomous from the rest of the organism. So there are different Russian dolls of autonomy, or something like that. Do you buy this perspective — I know there are multiple valid perspectives — but do you buy the organizational closure of minds?

Yogi    01:04:16    I'm not sure. I think it's definitely not cleanly distinct — it's a different type of organization. They have a really nice argument in their book, I have to say, about what the most basic closure is: metabolic closure, basically just keeping yourself alive, and then a regulatory layer on top of that which allows you to adapt. They call this adaptive agency, one level above basic agency. Basic agency is just having a metabolism that maintains itself and — if you buy into their perspective — also the boundaries of the organism; through those boundaries you have interactions with the environment, and through regulation you get an adaptive type of agency, where the organism is able to react to influences from the environment. And you can make a really convincing argument that these are, in some ways, different layers of complexity in a living system.

Yogi    01:05:15    So you could make a similar argument for neural regulation. As far as I understand, they argue that, through evolution, nervous systems have become autonomous in this way, and I do buy that general argument as plausible. Of course, it's just a scenario at this point — it's hard to prove — but it's a possible scenario. Still, I wouldn't call it completely separate; the danger is always nearby. The perspectivist point is that we can distinguish different aspects of an organism without having to separate them, right? This is very important and often overlooked, because you don't have to be able to separate processes in order to distinguish them and treat them in different ways. And again, perspectivism helps you understand that — it's a very powerful way of understanding why that is.

Paul    01:06:10    One of the things that you repeatedly bring up in your course — and I have to admit I have not read Re-Engineering Philosophy for Limited Beings yet, but I have used the same images, inspired by you; I basically just copied you — are the figures from Bill Wimsatt. I'm thinking specifically of the one showing the causal structure at our level of organization in the world, the biopsychological thicket. You repeatedly point to it and say you wouldn't expect to have a purely reductionist explanation, and that this is why perspectives are good, because each perspective is a cut through this causal, biopsychological thicket. And that's a long-winded introduction to my question, which is: are the brain and mind sciences in the same predicament as the biological — life, evolution, genetics — sciences, or are there important differences? Because I see so many parallels between what's happening in my world and what you describe happening in yours.

Yogi    01:07:21    The predicament is definitely the same, and it gets increasingly difficult as you ascend levels. Wimsatt makes some really nice arguments: if you have these perspectives, they have to cut the phenomena — the thicket — in ways that make sense, and as you get to ever more complex layers, in the social sciences especially but also in cognitive science, the challenges get bigger and bigger. Because, as I said in our discussion about consciousness, we have a really big problem there, and that is that nobody knows what we're talking about.

Paul    01:07:53    You're going to get in trouble.

Yogi    01:07:54    I know, I just said that — let the hate mail come. But I think that's a problem. Don't get me wrong, there are some really interesting arguments there, but I think it's really hard to cut that one in the right way. And I say that as we slowly cut our way through: agency has only become amenable to these kinds of questions very recently, and it's even becoming possible to study empirically how this organization of natural agents works. But this is also very dependent on the technology we have and on the other knowledge we have — it's very context dependent itself; it's an evolution of knowledge. So I think we can cut our way through the thicket. One of the basic insights from arguments about incompleteness in this area is that — as Kant said — you will never have a Newton of a blade of grass.

Yogi    01:09:01    You will never have a general theory like in physics. And of course Kant's own solution was a big cop-out: he said you should treat organisms as if they were mechanisms, even if they're not — this is called teleomechanism. And we still have that today, for example, in the writings of Dan Dennett: he has something called the intentional stance, right? He says organisms are mechanisms, but it makes sense to treat them as if they were not, because they behave as if they were not. And then I'm wondering: what are we doing here? Why don't we take this phenomenon of agency seriously? Maybe we'd learn something, instead of just explaining it away and saying, oh, we're only pretending that something has agency because it makes it easier for us to think about it. That's not consistent; for me, it doesn't work.

Yogi    01:09:47    It's something half-assed, really, to be quite frank. So — Wimsatt is really hard to read, but what I really like about him is that he takes this idea of needing explanations at different levels really seriously and makes a very convincing argument for it. He has a chapter where he describes the uses of reductionism, and again he takes a very sophisticated stance: he says it's good for a lot of things. I don't want to come across here as saying we shouldn't do this; I'm saying we have to recognize its limitations. What we're doing right now is going way beyond those limitations, and that causes all kinds of problems — for example, our arrogant attitude towards nature as controllable and predictable is one very big societal consequence of this failure to recognize our limitations.

Yogi    01:10:47    So this approach is very limited, and it doesn't apply to most of the things that truly interest us, if you think about it. Agential systems are involved in all the things that really matter, that are important to our survival: ecosystems, the economy, social networks that are disintegrating, and so on and so forth. So I think a science of agential systems is absolutely fundamental, and we do not have it yet. Complexity science, as it is right now, doesn't give us that, because whatever notion of agency it has is just computational, and that's just not working.

Paul    01:11:23    Even dynamical systems are hard to intuit, right? One of the appeals of the mechanistic approach is that it's intuitively appealing — thinking about the entities and their parts and activities to explain the phenomenon at hand. And when you start talking about bifurcations and dynamical landscapes and trajectories, all of a sudden it gets slippery. So, I don't know — is the agential perspective going to be even worse?

Yogi    01:11:53    Yes. It’s going to be a lot worse than that.  

Paul    01:11:58    You end your course on a sort of aspirational note — you say, oh, I'm going to have to go learn a lot more math now, because it's a real project.

Yogi    01:12:11    I think one of the interesting things is to explore the limitations explicitly — to push things. This has, for example, not been done enough with taking recursive functional programming to its limits. These are programming approaches that allow you to operate not only on the state of the system or its parameters, but on the very structure of the system — on the operators, on the system itself. So you basically have a program that rewrites itself, and that's already a pretty good approximation to what an organism is. But it remains algorithmic, so at some point it will break and we'll get things happening that are not captured by this formalism. The question is where that happens, and whether it happens often enough to be relevant and important for our understanding of evolution and of other systems that involve agents — cultural evolution, the economy. My intuition is that this plays a huge role in social systems and probably a somewhat lesser role in evolution, but I still think it clearly plays a really important role.
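
To make that idea concrete, here is a minimal sketch — not from the episode, and not any established formalism; the rule names and the toy "cell"/"neuron" example are illustrative assumptions — of a program whose rules operate on the rule set itself rather than only on the system's state:

```python
# Toy illustration of "operating on the structure of the system":
# every rule takes (state, rules) and returns a new (state, rules),
# so a rule can rewrite the rule set itself -- the program modifies
# its own operators, not just its state or parameters.

def grow(state, rules):
    """Ordinary rule: changes the state only."""
    return state + ["cell"], rules

def differentiate(state, rules):
    """Higher-order rule: replaces itself with a rule that did not exist at the start."""
    def specialise(s, r):
        return [x.replace("cell", "neuron") for x in s], r
    new_rules = [r for r in rules if r is not differentiate] + [specialise]
    return state, new_rules

state, rules = [], [grow, differentiate]
for step in range(4):
    rule = rules[step % len(rules)]
    state, rules = rule(state, rules)
    print(step, state, [r.__name__ for r in rules])
```

Even with this self-rewriting, the whole loop is still an ordinary algorithm running on a fixed machine — which is exactly the limitation being pointed out above; the open question is what, if anything, in living organization escapes that kind of formalism.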

Yogi    01:13:26    And, as you say, we modern, enlightened people who go into science are used to thinking that anything that is not mechanistic is not scientific. I want to come back to that point, because it's just not right; there is no reason for it. My friend, the philosopher Denis Walsh, makes a really convincing argument that certain types of teleological explanation are completely scientific and are okay — but only certain types. For example, the teleological explanation that an organism acts in a certain way because it wants something: he has a very strong argument that this doesn't violate any of the usual objections to teleological explanation — that it implies causation from the future, et cetera. That's not a problem. Other people, of course, try to convince us that evolution has an ultimate goal or something like that, and that is not legitimate. So the line cannot simply be drawn at mechanism, saying that anything

Yogi    01:14:23    that is not a mechanistic explanation is not scientific — that's taking a very narrow stance on what a scientific explanation is. And it's a pretty recent thing, going back at most to the scientific revolution; before that, views of the world were much richer in terms of what kinds of answers to the question "why" you could provide. I think we've lost something there, and if we don't recognize that limitation in the first place and get past it — that's the thing. We are so embedded in it that we don't even see it anymore; it's like the fish having no knowledge of the water around it. It's the water we're swimming in, and we've completely forgotten that we actually constructed this approach to the world quite recently, and it was so successful that we've completely forgotten all the other stuff we threw out to make it work in the first place. It's time to get back to that, because a lot of the problems we have right now in understanding our situation in the world, and in understanding truly complex systems that have agents in them, have to do with these philosophical questions we've been discussing. And of course the neurosciences are completely included in that.

Paul    01:15:46    There's this cry for theory — that we have so much data, we don't know what to do with it, and we need theory. It's prevalent in the neurosciences and the sciences of the mind. But watching your course, you bring in so much biological theory that it looks like a golden age of theory in the biological sciences. Is that a mirage, due to your scholarship, because you highlight so many efforts?

Yogi    01:16:19    So here, again, the real world isn't quite that beautiful a picture. I think there's lots of good theory to be produced in neuroscience and the life sciences at the moment. The problem is that the scientific system is set up so that you have to shout at the world all the time: here I am, here I am, it's me, it's me, it's me. So people produce theory that is self-serving, targeted not at a deeper understanding of a phenomenon but at self-promotion. I think our fields are flooded with this type of theory, which, again, gives theory a bad name. So we have this sort of self-serving, shallow theory, as I would call it — the technical term, used by the philosopher Harry Frankfurt, is bullshit — and it is increasingly prevalent, because you have to produce it to be seen and heard at the moment.

Yogi    01:17:15    And I think that's a real problem. We're flooded by this, and it's certainly counterproductive. I'm gearing up to write a paper about the pernicious role of shallow theory: first, it gives theory in general a bad reputation; second, it creates an illusion of understanding where there is none. And I have to say that some aspects of the discussions about consciousness are of that type, as are some theories in evolutionary biology that are competing and missing the target. As I said before, if you have concepts, you have to analyze them and see what kind of work they do — and often we introduce new concepts and so-called frameworks that don't actually do any work. I think that's a real problem, because it makes it really hard for people to recognize the important questions. Again, Wimsatt has beautiful arguments about this — how we often don't realize that we're talking past each other, or just shouting past each other; he talks about these pseudo-debates that happen, especially in this kind of causal thicket. It's really hard to see the forest for the trees. That's the nature of the game: it's hard to do biology, it's hard to do neuroscience, and it's definitely hard to do social science.

Paul    01:18:38    Well, it's hard to keep a career going as well, and that's a motivator, I think.

Yogi    01:18:43    Right. So we cut through by just razing the thicket, completely burning it down — and that's not a good way to go about it.

Paul    01:18:54    Well, speaking of theory, you were about to start talking about something I definitely wanted to discuss, because one of the things your course has done — and my other reading as well, I don't want to give you all the credit, of course — is plant the idea that synthesis is not the goal. Growing up scientifically, synthesis is always the goal: you need to synthesize, and that's how you understand, that's how you explain things. But one of the things a perspectival approach leaves room for is that synthesis is not necessarily the be-all-end-all. In the course you talk about the Modern Synthesis and the Extended Evolutionary Synthesis, and these attempts to, quote unquote, synthesize and come up with a grand unified theory of evolution. And there are a lot of people talking about grand unified theories of neuroscience and the brain, et cetera — and plenty of pushback too. You would be among those pushing back on the idea of needing a grand unified theory in this causal thicket. So I just wanted to cue you and ask: why is synthesis so bad, Yogi?

Yogi    01:20:07    Here you have this process, evolution, whose only function — it's dangerous to put it that way, I would say — is to create diversity. We have some unifying principles, like natural selection, although even population geneticists know very well that a lot of evolutionary processes don't involve natural selection; but you have this sort of principle. Now think about the complementary aspect of evolution: processes that create new phenotypes, sometimes entirely new levels of organization — the so-called major transitions of evolution, eukaryotic cells, and of course the evolution of consciousness, which in the end can be seen as something like a new level of organization, a new degree of freedom, as we said before. I think that makes a lot of sense. So you wonder: how can you have a general account of the processes that create what is actually being selected? And I don't think we can, because they are processes of a nature that is constantly reinventing, remaking itself.

Yogi    01:21:12    So whenever you think you have the final theory, it will surprise you again. For that reason alone, the idea is bizarre. As for the Modern Synthesis: it has been argued very convincingly by different people — my friend and colleague Arlin Stoltzfus among others, and Ron Amundson, a philosopher of science — that the synthesis was really more of a restriction. It was a form of scientific gatekeeping, which was very important at the time to define a field that was new, but it excluded more than it synthesized, to be honest, because all these constructive aspects were left out. And this movement, the extended synthesis, is rightly reclaiming those neglected, constructive aspects of evolution for evolutionary biology — they're going in the right direction there. The mistake is

Yogi    01:22:13    the synthesis part — it's really just a slogan, because there is no synthesis. The striking thing about this extended synthesis is that it's just a bunch of disconnected phenomena — niche construction, phenotypic plasticity — thrown together. You see these diagrams where the concepts are somehow put in circles around each other, but that makes no sense. It's just a bunch of things that are partially already part of traditional evolutionary theory. If you go back to Darwin: niche construction was in Darwin's books — he knew that earthworms make the soil that is their habitat; he has an illustration of it in one of his books. So it all stays rather vague, just throwing concepts around — but if you do theory, the theory needs to do work. And I'm not saying there aren't lots of excellent people in this movement; there are, and they do excellent empirical and theoretical work on the specific problems they're working on.

Yogi    01:23:14    What I'm saying is that the framework — this general idea that there is a well-definable Modern Synthesis that needs to be extended — is, well, can I call it bullshit? I think it's just a tool: a political tool, a gatekeeping tool, a tribal tool. There's a fierce debate between those camps, and if you're neither on the anti side nor on the pro side — yes, I do feel it, I can tell you, especially as a young, untenured researcher. It's a pseudo-debate that brings the field nowhere, and I've been sitting between those two camps for a while now. It's not pleasant, but it's very important work to do, to say: look, this is a soccer game where I want both teams to lose. I don't think this is a productive debate, because I don't think the conceptual frameworks being presented are anything but tools for politics — academic politics — rather than theoretical tools for insight.

Yogi    01:24:20    And I think we need to talk about this, but it's very difficult, because you immediately get shouted at, and that's not healthy. I'm a bit provocative about it, of course — I don't mind getting shouted at — but I do think we need a productive discussion where people try to understand the actual problems. And I think the underlying problem is exactly that the synthetic approach is completely wrong in an area where you're looking at a causal thicket — a thicket that, on top of that, is constantly creating novelty as an essential part of the game. You need a perspectival approach. This has been recognized by a few people, of course — the go-to for that is again Wimsatt, who actually did his PhD work as a philosopher in the lab of Richard Lewontin, a very good evolutionary biologist who is often named as one of the core Modern Synthesis proponents and who was one of the best thinkers about dialectics in evolution.

Yogi    01:25:20    And Lewontin is often maligned because he was a Marxist — as if that made his science communist; it's actually dialectics. I like his work, but people distance themselves from it because of that. I think these kinds of ideologies intruding at the moment are more a sign of a high-pressure environment — the level of funding, lots of infighting — than of a productive theoretical discussion. That basically sums it up. And I think it's time to move beyond synthesis, because historians of science have made a really convincing argument that synthesis is a positivist remnant — a remnant of an old philosophy of science in which the aim of science is to produce large-scale theories from which everything else is deduced. That's just not going to work in modern biology, because we're dealing with novelty-generating processes; evolution, cultural evolution, the economy — it's the same. These fields need a completely different, perspectival approach, because we're dealing with truly complex systems.

Paul    01:26:34    In the last few moments here, I just want to zoom out and ask you some career questions, if you'll indulge me. One: your path has been unique — everyone says their own trajectory is unique, but I look at yours and it truly is. If you could go back, would you do anything differently? Would you tell your younger self anything, or try to convince your younger self to do anything differently?

Yogi    01:27:04    Yes. The one thing I wouldn't do is worry about things that are five years in the future, because anything I worried about five years out never happened — I always ended up somewhere completely different from where I'd imagined. So that would be my advice to young scientists. The other thing, if you go into science right now — and I think I've managed this to a certain degree, but not enough — is to really do what you care about, what you're passionate about. I often managed to do this, but there were a few occasions... there are really two things. One is that I wouldn't work on certain topics just because I thought they would be good for my career — that happened very early on, where I just thought, okay, this is going to be good for my career.

Yogi    01:27:53    I wouldn't do that anymore — in fact, I decided pretty early on not to do that. The second thing is: don't work for people who have the wrong attitude towards doing science and are bad instructors or mentors, who sometimes have a hidden agenda or are playing academic games — and that's becoming more and more pervasive. This academic politics is a game I've decided not to play, and it's also why I'm out of the traditional academic career path. I don't even want to get back into it, even if I could still get a job at an academic institution. Life's too short.

Paul    01:28:38    What if you could be the head of a department, though? That would be even worse, wouldn't it?

Yogi    01:28:43    No, no, no, no. Being head of an institute — as beautiful as that work was, and it was very good — was fortunately for a limited amount of time. It wouldn't have been good for me; I am an explorer, I want to do academic research. I read an article in Times Higher Education entitled something like "If you like research, academia may not be for you." It was based on a survey of scientists who said that, on average, they spend 11% of their time on research, and the article basically asked: if you really care about research, why not get a 50% job somewhere else? So I'm trying to get a business model to work where I teach courses, I do retreats about the academic system for young scientists, and I'm trying to start something like mentoring — addressing the dimensions of learning that we often neglect, personal growth in various directions, which is never assessed in the metrics we have; those only measure factual knowledge and your usefulness to an economic system, and that's not right. I'm trying to earn money with that, with a community, on a freelance basis, and then use the rest of the time for research — instead of going the head-of-department route. I've worked under so many heads of department, and it's been a nightmare. When I saw my future in that, it was like: oh my God, no, I don't want to do that. No, thank you.

Paul    01:30:15    Yeah. Hopefully the heads of departments aren’t listening right now, um,  

Yogi    01:30:19    Oh, I admire them — a lot of them do a really good job. I'm just not cut out for it. That's not a judgment; I'm not saying they're all bad or anything. It's just: no, I don't want to do that.

Paul    01:30:31    Don't worry, I'll edit out the admiration comments. So, my last question. One of the reasons I got out of academia — one of the reasons — is that I felt I was becoming more and more specialized. My skills were getting deeper, of course, and I was learning a lot in that respect, but I was losing the forest for the trees. For instance, there was important work just adjacent to what I was doing that I wasn't even really aware of, or didn't appreciate, because I was so focused on my own work. And it seems like your path has gone in the opposite direction: you've either fought to maintain a broad picture and think about what's important, or spent extra time doing that, or somehow done it magically — because I know you've also done a lot of hard, deep work modeling the systems we've talked a little bit about today. Can you paint that picture for me? Has it been a struggle, because it kind of goes against the grain?

Yogi    01:31:38    I arrived at this point after 20 years of studying a set of genes in flies called the gap genes. I never, ever want to hear —

Paul    01:31:51    Sure, but your favorite organism is Drosophila melanogaster, right?

Yogi    01:31:57    No, really — it was very rewarding work. I knew very early in my career that my strength was in theory, but I decided to work in the lab and then become a group leader precisely because I wanted to make an empirical contribution and not just sit around having theories about other people's work. I think that has paid off really well, always with the theory running in the background. And I was privileged to do this crazy master's degree in holistic science — it was amazing, path-changing and life-changing for me, because it focused me. I went to Brian Goodwin because I had read his book How the Leopard Changed Its Spots as a student, and at that time he was very influential, along with a bunch of other people working in the complexity sciences — these were the kinds of people I really wanted to keep in mind.

Yogi    01:33:03    I got that priming there, during that one-year master's course: really thinking about phenomenology and process thinking — that's where it all started, and I never lost it. I've kept it up through a decades-long collaboration with my friend, who is a real process thinker; we kept at it and published these papers in relatively obscure journals. But when people ask me what the most important papers I've written are, I often put those papers first, because intellectually they were the guiding ones. We also did some really nice empirical work, and the people involved in doing it — I cannot tell you how great they were and how rewarding it was to work with them. But in the end, the big-picture stuff is always what I wanted to focus on. It's very difficult to do: during a normal scientific career you have to do it as evening work, and you have to compromise to survive, if you want to survive — or you have to decide to do research outside the traditional academic system, which is what I've increasingly done. We still have to find ways to make that work, but I think it will increasingly be the way to go for conceptual innovation, because the academic system, I'm sorry, is not cutting it anymore.

Paul    01:34:28    That's quite a place to end it. All right, Yogi. I've been excited to have this conversation with you for a long time, because I love the course, and I'm so glad you agreed to come on and talk with me. It's been a real joy, and I'm really glad to introduce you to my podcast audience out there.

Yogi    01:34:49    It’s been absolutely fantastic to talk to you. Thank you.  

0:00 – Intro
4:10 – Yogi’s background
11:00 – Beyond Networks – limits of dynamical systems models
16:53 – Kevin Mitchell question
20:12 – Process metaphysics
26:13 – Agency in evolution
40:37 – Agent-environment interaction, open-endedness
45:30 – AI and agency
55:40 – Life and intelligence
59:08 – Deep learning and neuroscience
1:03:21 – Mental autonomy
1:06:10 – William Wimsatt’s biopsychological thicket
1:11:23 – Limitations of mechanistic dynamic explanation
1:18:53 – Synthesis versus multi-perspectivism
1:30:31 – Specialization versus generalization