Brain Inspired
BI 077 David and John Krakauer: Part 1

David, John, and I discuss the role of complexity science in the study of intelligence. In this first part, we talk about complexity itself, its role in neuroscience, emergence and levels of explanation, understanding, epistemology and ontology, and really quite a bit more.

Notes:

Transcript

David    00:00:01    I care that we are honest about what we don’t understand, and the limits of our methods and the limits of our frameworks. That, to me, is what’s important.

John    00:00:11    I’ve come to the conclusion that there are two things that you cannot avoid if you are thinking about science: one is you have to think philosophically, and the other is you’re going to have to deal with complexity in its broadest conception. I don’t think you can escape either of those ways of thinking that go along with the more traditional notions of what a scientist does.

David    00:00:37    I just want to say, I don’t think mind emerges from brain. Mind emergently is engineered by an environment. And that’s the thing that I’ve always found missing in the mind-brain discussion, the third part, which is: I think it’s pointless to talk about mind without talking about environment.

John    00:00:57    There seems to be a trade-off between collecting data versus actually slowing down and having a think.

Speaker 3    00:01:10    This is brain inspired.  

Paul    00:01:24    Hey everyone, I’m Paul Middlebrooks, and those were the voices of two brothers, David and John Krakauer. I’ve had John on before. He runs his Brain, Learning, Animation, and Movement lab, his BLAM lab, at Johns Hopkins, where their work circles around motor learning and memory, learning motor skills and recovering motor skills after brain injuries. And David is the president of the Santa Fe Institute, the SFI, where they use complexity science to study all sorts of things. David himself has a wide range of interests that maybe all spring from a central interest in the evolutionary history of intelligence and information processing at various biological levels, from molecules to cells, to collective groups, to societies and beyond. And you can hear David often on SFI’s podcast, which is called, simply, Complexity. So this is one of those episodes where I let it run a bit wild and take its own course.

Paul    00:02:25    Although there are a few main themes as we talk. Those main themes are complexity science itself and its role in neuroscience; how to think about emergence and what are the right levels of explanation, especially in hierarchical systems like the kind we’re interested in regarding intelligence and cognitive phenomena, cognitive phenomena from, you know, the simpler, like reflexes or making eye-movement decisions, up to the highest order, like awareness, consciousness, projecting your astral body, things of that nature. We talk about understanding, and this is really just scratching the surface. It’s fruitless for me to list everything that we talked about, but maybe the best way to characterize the conversation is that it’s about how to think about whatever you’re working on. We spoke for a long time, so I split the episode: this first half is pretty broad and lays the groundwork, and the second half, which I’ll publish in a few days, really heats up.

Paul    00:03:29    And we talk more specifically about brains and minds and what role complexity science thinking can serve moving forward. I linked to David and John’s information in the show notes, as well as a few of the resources mentioned throughout, at braininspired.co/podcast/77. If you value this podcast and you want to support it and hear the full versions of all the episodes and occasional separate bonus episodes, you can do that for next to nothing through Patreon; go to braininspired.co and click the red Patreon button there. This was especially fun for me. I have an older brother, and John is David’s older brother, but my brother and I usually cogitate about things like the sounds and smells of our respective children, you know, the latest embarrassing mistake either of us has made, and there’s never a shortage there, and things of that nature. So it was fun to witness David and John go back and forth on these topics of intellectual exploration. And in many ways, as I expected, I sort of just pressed go on the Krakauer machine and just tried to keep up. Anyway, I hope that you enjoy it and it makes your world better, as it did mine.

Paul    00:04:44    David, I’ve heard you talk about your conception of intelligence and stupidity. I don’t know if you want to briefly talk about what those are, but you also talk a lot about entropy, and when I hear you talk about intelligence and stupidity, it maps directly onto what I’ve heard you say about increasing and decreasing entropy. Is there a direct mapping between those two?

David    00:05:12    There’s a relationship in the sense that methods of information theory that deal with concepts like entropy are useful in understanding intelligence and brains and so forth. So I don’t use it in the sense that, say, my colleague Sean Carroll would, in terms of the entropy of the universe, though they might be related. I use it quite operationally, in terms of the reduction of uncertainty between a signaler and a receiver. And so that’s the common sense, I think.

Paul    00:05:47    But from what I understand, an intelligent process for you does decrease entropy, whereas a stupid process, by your definition, seems to increase entropy. Do I have that mapping right?

David    00:06:00    Yeah. I mean, to this extent: when someone is teaching you something, you come to an efficient means of arriving at the right answer, right, as opposed to a whole bunch of spurious answers. And at that scale, you could say the entropy is reduced. Very operationally. But I wouldn’t overstate it; there’s much more to intelligence and stupidity than just a very simple information-theoretic measure like entropy. It’s just a part of it, just a bit of arithmetic that you use to get at the questions.
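A minimal sketch of that bit of arithmetic, in Python; the four-answer teaching example is invented here for illustration, not taken from the conversation:

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# A student guessing among four answers: maximal uncertainty.
prior = [0.25, 0.25, 0.25, 0.25]

# After a teaching signal, belief concentrates on the right answer.
posterior = [0.85, 0.05, 0.05, 0.05]

h_before = entropy(prior)      # 2.0 bits
h_after = entropy(posterior)   # ~0.85 bits
print(f"uncertainty reduced by {h_before - h_after:.2f} bits")
```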

Paul    00:06:33    Very good. And by the way, welcome, David and John, to the podcast. John, welcome back. And David, thank you. The brothers Krakauer, being here together. What a day, what an episode already.

David    00:06:45    It’s a horror, it’s a horror movie. It’s a horror-cast.

Paul    00:06:49    I’ll have the background music going, the soundtrack. So guys, let’s talk about complexity first. I just actually had Olaf Sporns on the show, and he, you know, is a network neuroscientist, and he has done a lot for introducing, well, complexity into neuroscience, via networks. And I think that neuroscientists can be hesitant about embracing complexity in their research, because you’re already dealing with a difficult and challenging area of study, and then you think, oh yeah, it’s complexity, and then you sort of approach it and you think, oh shit, because you end up studying complexity itself, it seems, no matter what. And then you’re out of the realm of actually studying brains, or you could end up that way. There are a few different ways to think about complexity, and David, I’ve heard you slice it in multiple different ways.

Paul    00:07:46    One is that complexity is networked adaptive systems. And so there are two words there. Networked is one of them, and I had Olaf on and we talked all about that, and adaptive is another. So it strikes me that all complex systems are both networked and adaptive by this definition. And it strikes me there are networks that are not complex because they’re not adaptive; for instance, a spiderweb or something, I don’t know if you’d call that adaptive, but you could consider every crossing a node, for instance, and that’s not adaptive. Are there adaptive systems that are not complex, though, adaptive systems that you wouldn’t consider a network? And by the way, both of you just jump in, because these are all questions for both of you.

David    00:08:35    Yeah, so, possibly. I mean, someone might claim a thermostat was adaptive, but I don’t think we treat it as a complex system. I mean, there’s a way of making this really simple, right? And that’s just that many of us have felt dissatisfaction, right, with the methods and the frameworks that people have been using to deal with a certain class of phenomena in the universe, and that class of phenomena we call living, right? I mean, we won’t get into a big discussion about what life is. And complexity science basically says that across many of these different living domains, and by the way, we would include the economy in that, that’s what makes us maybe a little different, there are common principles and there are useful methods that can be applied to all of them. So I think that’s just the straightforward way of talking about it.

David    00:09:30    That’s not mystical. And it’s very unfair, I think, when people who are disciplinary say, what’s complexity? Because if you ask a physicist, by the way, what’s physics, and you should try this, because I have, I always do this, you know, they can’t answer. And the best answer you get is, we do back-of-the-envelope calculations, which tells you absolutely nothing. If you ask a biologist, what is biology? They’ll say, well, it’s the study of living systems. And then you ask them what life is, and then it all goes horribly wrong. So all fields, right, have this problem of having an essential quality, and then being defined broadly in terms of the domains they study and the methods they use.

Paul    00:10:13    Do people really, do you think that people have the notion that there’s something mystical about complexity? Is that the general response you get?

David    00:10:22    Well, it’s worth getting at the history. I think part of the problem is that, very broadly speaking, there are two schools. One of them is interested in determinism and deduction and pattern, and it really goes back to Alan Turing in the 1930s. Okay. And that has its chronology through Ulam and all the way up to John Conway, who just passed away, and most recently, of course, people like Stephen Wolfram. They’re interested in patterns and simple rules that produce them. And that’s one school. It doesn’t have much to do with the adaptive world, right? It’s not really natural science; it’s a kind of logic that uses computation and mathematics. There’s another school, which is very much what the Santa Fe Institute is, and it’s interested in randomness, empiricism, more interested in induction and universality. And for that reason, we were interested in economies and history and brains and genomes right from the beginning.

David    00:11:30    And those three elements are critical, in particular how laws, natural laws or contingent laws, evolved regularities that exploited randomness to create function. That’s why information theory is important. And that’s what we do: we’re looking for these constraints, if you like, or physical principles, through which noise is filtered to produce things which have regularities that might be universal, meaning that we’ll find patterns in the brain that resemble patterns in culture. And that’s our style of complexity science. And in the end, it’s only as good as the work that’s done, right? In other words, it’s a very high-level specification, and we’ll get into it of course today. But you mentioned Olaf, a very good example. Olaf is on more of the applied side, of course, uses network theory to study the brain, which is extremely useful and insightful, and no one calls that mystical; maybe it’d be weird if they did. So it comes down in the end to where the rubber hits the road, and that, I think, demonstrates that there’s no mysticism involved.

Paul    00:12:38    I mean, that was an interesting thing, because I hadn’t thought of the mysticism aspect of people sort of being wary of it. It sounds like people are wary of it if they come in thinking it’s mystical, and maybe it’s because it’s still new; complexity science is even newer than neuroscience, which I thought was interesting. The word, at least.

David    00:12:59    Yeah. Well, I mean, again, in that second tradition of randomness and induction and universality, you have people like Claude Shannon in the forties, and Wiener and others, and Murray Gell-Mann of course, and Phil Anderson, and in our community Manfred Eigen, who were not considered mystical, because many of them won Nobel prizes. So I’m not quite sure where that comes from, but I think what makes it suspect in the eyes of many people is, first of all, most people hate ideas, philosophy, and theory. If it’s not factual in the most, you know, mind-numbingly obvious way, they’re suspicious of it. And the other is that they think it’s a theory of everything. And there have been people in my community who have made that mistake, actually. And I think, to be honest, it’s more of a crime in the first school, the sort of deterministic, deductive school that looks for metaphorical correspondences in patterns, you know, this looks like that, therefore I’ve explained it, as opposed to this more inductive school.

Paul    00:14:04    I kind of get the sense when I, you know, watch you give a talk or read some of your works and I’m probably way off base. Is there a sense at the Santa Fe Institute of almost pride in being an underdog?  

David    00:14:20    Probably. We’re all a bit childish, so maybe. I suspect that if you don’t have pride in being an underdog, then you shouldn’t be a scientist.

Paul    00:14:31    Well, what happens when complexity science comes to dominate and gains the respect of all of the sciences, the respect it should have already gained, and in part already has?

David    00:14:40    Yes, fair enough. I don’t think it’s complexity science per se, but that’s another issue, and John should jump in here. There are personality traits that are correlated with novelty.

Paul    00:14:54    I mean, Mavericks is an entire section of one of the recent books collecting SFI people.

David    00:15:01    Yeah, no, you’re absolutely right. And I do believe it’s true that there are personality types that are drawn to novel frameworks and are not made uncomfortable by them. There are many other people who are equally good scientists who are satisfied by recapitulating what has gone before, and perhaps come up with discoveries that have huge depth. I think there is a personality aspect to science; there’s no point denying it. People were obsessed with Albert Einstein’s hairstyle. They were obsessed with Richard Feynman brushing his teeth with his own urine. We shouldn’t deny it. It just shouldn’t be confused with the quality of the work.

Paul    00:15:46    John, I don’t remember if you told me this last time you were on or if it was offline, but I think you mentioned that David finally sort of pulled you in, and I don’t know if he convinced you that what you do is complexity science, or convinced you that you should come more toward the complexity science side of things. Maybe you can untangle that.

John    00:16:16    Yeah. I mean, I definitely think that there are moments where I find myself, in my own work, rediscovering for myself things that I could have just gotten a quick update about if I’d spoken to David in the first place. In other words, I think he sort of kindly allows me to think that I’m thinking new thoughts when in fact all that’s happened is that I’m beginning to see the light that has always shone from the top of that hill in Santa Fe. And also, I think, David, I’m trying to remember now, when he talked about what complexity is, it’s really about hierarchical systems, levels of explanation, coarse-graining. In other words, it’s not just about defining complexity. It’s about recognizing that there are multiple disciplines, and that there’s something about the structure of knowledge, the ontology of knowledge, that has to be subject to a way of thinking about it.

John    00:17:11    And complexity science covers that, in a way. Hasok Chang, who I spoke about before, in the philosophy of science, talks about going after those things that scientists in their everyday work or conceptual schemes pass by. Complexity science, I think, is the equivalent of that, in so much as it addresses things that people have an inkling about, they give them a sideways glance, but don’t really want to have to tackle directly. And I think if you become serious about your subject as a scientist, I’ve come to the conclusion, there are two things that you cannot avoid if you are thinking about science: one is you have to think philosophically, and the other is you’re going to have to deal with complexity in its broadest conception. I don’t think you can escape either of those ways of thinking that go along with the more traditional notions of what a scientist does. Does that make sense?

David    00:18:20    Yes. I think, again, I mean, it’s really important, and Paul asked that question about entropy at the beginning. But seriously, when I say that complexity is as broad a church as physics, right, in other words, it shouldn’t be confounded with a method. And I think John’s absolutely right. If we think about adaptive systems carefully, right, from genetics up to economies, which we do, that is what we do, then there are common principles. And moreover, you have to be able to defend and explain why it’s defensible to be an economist, right? In other words, and we’ll get to this when we discuss emergence, we think it is not only acceptable but correct to have multiple levels of description and explanation and understanding. They might not align, by the way, with the current disciplines, and that’s a very interesting fact. I mean, we might have to reconfigure that space. It might be that there are only two things, or a dozen things. But John’s right: it’s an approach to understanding this domain of matter that exhibits purpose at multiple scales, and an exploration of the kinds of ideas that work best at each of those scales. And so it’s a very broad church. I think that’s very important.

Paul    00:19:38    Well, it’s basically agnostic with respect to methods, correct?  

David    00:19:43    Well, yes and no. It’s interesting, I did another interview on this, and: yes and no. I mean, where we get a little upset, look, we’re having this conversation now in a period of huge trauma in this country. And many of my colleagues were involved in writing down mathematical models for epidemics. And I did an interview yesterday where we were talking about what’s missing from those models, and those models, by the way, are crap if you’re trying to explain what’s happening to African Americans and Native Americans. So it does matter what methods you use. And I think a lot of these mathematical formalisms have been so beholden to the fantasy of parsimony inherited from physics that they fail to address the complexity that we, with our eyes open, see. So we’re not agnostic about methods. We think it matters which ones you pick, and that they be true to the phenomena under investigation.

Paul    00:20:41    When you get a well-rounded knowledge base in complex systems, because I think it takes a broad spectrum of knowledge, I certainly don’t have a good grasp of the entirety of what complexity is, because it touches so many different realms. And it does cover many different methods; if it’s not agnostic to methods, it at least has a broad swath of methods it can employ, and its purpose is to pick the correct method given the problem, perhaps putting words in your mouth. But when you get this well-rounded knowledge that is necessary in complex systems, does it transfer? Can you hop around between complex systems and understand them using the same approach more easily once you have a broad base in that knowledge? Does that make sense? It does.

David    00:21:38    Again, I want to say something quite subversive. I mean, I believe I have the sort of same attitude toward complexity as I do to dentistry, right? I wish that teeth were all healthy enough that dentists could go away. And I wish that we thought thoughtfully enough about the living world that complexity science could go away too. I don’t care about an area of science in the slightest. I care that we are honest about what we don’t understand, and the limits of our methods and the limits of our frameworks. That, actually, is what’s important, much more than anything else. There’s too much emphasis, I think, on this sort of... the last thing I would like to see happen is complexity become disciplinary. I think that the Institute itself, for example, to the extent that it represents that world, has to be constantly mutating into some hopeful monster that can address difficult problems of the future. I’m much more comfortable with the X-Men model of science.

Paul    00:22:38    I mean, there are common principles to complex systems, and we’re going to talk about its relation to neuroscience and the brain and the mind in just a moment. But I found myself wondering whether there are known principles from the study of various complex systems that have transferred from one complex system to another, such that in the new complex system you’re beginning to study, you see where the actual holes are, and where some principles from a previous, already well-studied complex system are supposed to fill in those holes, and you’re able to then predict what you might empirically find in the new discipline you’re trying to study. That was a big bag of words, so I apologize.

David    00:23:31    John, do you want to take that on first? And then I can jump in.

John    00:23:34    I don’t think I can be authoritative enough about another area outside of neuroscience to know whether it will lead to some savings in the way we apply it to neuroscience because we’ve learned it in others. But I have to believe that when it comes to physics, you know, whether it’s condensed matter physics, solid state physics, and what’s been learned about hierarchical systems and emergence in physics, I must believe that the physicists could be of huge value to us when it comes to thinking about these things. So Phil Anderson, who just died, and the way he thought about the disciplines and the way he used examples from physics and superconductivity, I find extremely informative. In other words, effective theories in physics, and how they can be hived off to some degree from their hierarchical position, those are all hugely valuable ideas that I think neuroscience would benefit from. And all of that, emergence, and, what is it called, the disciplinary fragmentation that is ontological that Anderson talks about, psychology and economics and biology, which he believes are the real disciplines, right, he has it in his paper. So we must learn from the physicists, I would say. But maybe, I don’t know what David has to say about that.

David    00:25:02    Well, I wouldn’t give the physicists too much credit; I feel they’ll just take it anyway. Look, I’m something of a mutant, I’m surrounded by physicists. I feel there’s another way of thinking about it, right? Nonlinear dynamics: you can’t really do any modern science without doing some nonlinear dynamics. You can’t really do modern science without doing information theory. And nowadays it seems that you can’t really do neuroscience without talking about circuits or networks, et cetera. And it goes on and on. And of course we’ve been working on dynamics on networks, nonlinear dynamics, since the beginning. So in a way there’s this natural diffusion of more advanced, I’d say, methods that is hardly restricted to the Institute, quite the opposite. And just look at the history of neuroscience.

David    00:26:05    There is a history, and John knows it much better than I do, and you do too, Paul. But certainly if you think about McCulloch and Pitts and von Neumann and John Hopfield and Bialek, and most recently people like David Mumford, there’s this obvious ferment in new techniques and methods that they actually describe as complexity themselves, quite interestingly. So it’s hard to imagine any field evolving if it were not open to new formalisms. And I actually think that we haven’t even started, and we might get there. And I think the emergence point that John raised is very important. I don’t think we have a clue how, really, to theorize about things like the brain, and we’re still in the descriptive phases. That’s great. Lots of opportunities.

Paul    00:26:55    Where do you think we are in complexity?

David    00:26:57    But again, you know, I think it’s a brand new area. I mean, think about things like the use of maximum entropy approaches that Jaynes first pioneered in statistical physics, that people like Bialek use so effectively in looking at spiking, that people like John Harte use so effectively in ecology. Look at scaling theory, which has been so effective in looking at allometry. And this is, you know, a couple of decades old. So you’re right, we’re embryonic.
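For listeners unfamiliar with the maximum entropy approach David mentions, Jaynes’s recipe, stated in its standard general form rather than anything specific to the work discussed here, is to pick the distribution that maximizes entropy subject to the measured constraints:

```latex
% Maximize H[p] = -\sum_x p(x) \ln p(x) subject to observed
% expectation values \langle f_k \rangle. The solution is the
% Boltzmann / exponential-family form:
\begin{equation}
  p(x) \;=\; \frac{1}{Z(\lambda)}
             \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
  \qquad
  Z(\lambda) \;=\; \sum_x \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
\end{equation}
% with the Lagrange multipliers \lambda_k tuned so the model
% reproduces the measured averages \langle f_k(x) \rangle.
```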

John    00:27:32    And it does still seem like it is diffusing over, from the examples you just gave, from physics into other disciplines.

David    00:27:39    That’s absolutely right, but also from mathematics and other fields.

Paul    00:27:48    But yeah, you know, I asked about certain principles transferring to new domains, so that when you’re exploring a new domain you might know what to look for. And you just mentioned scaling laws, and you can think of something like scale-free distributions and how they are common among all sorts of complex networks, adaptive systems, and, you know, indeed you find them in the brain. And I’m wondering how many of these different principles there are that we can get, you know, from a table: go look and think, scale-free, I should find that at this level, when I look at, um, you know, the mechanistic level or something, at the spiking neurons, I should find a scale-free distribution, and aha, I do. And, you know, how many of those types of things do transfer across complex systems? I mean, I know it’s very...

David    00:28:33    I’m not a lover of that kind of work, to be honest. I find it very descriptive and phenomenological. In other words, it’s true that it’s intriguing. Of course there was a huge brouhaha when some of our faculty got very involved in small-world networks. It was small world then, and now of course it’s fat tails, and it’ll be something else in a few years. And I have nothing against that, but it’s really just shining a flashlight on an area that we then have to think about much more carefully, empirically and with much more fine-grained models. And so I don’t like papers that pretend to report an insight when they’re just doing fancy statistics, which is all that is, by the way. So, I mean, when people like Per Bak and others got interested in, you know, power-law distributions around self-organized criticality, they provided a model, the sandpile model. Now, it turns out that the sandpile model was wrong, but nevertheless, it wasn’t enough just to describe the pattern. So I think that’s some of what tends to give complexity a bad name, because it’s a bit superficial. It’s where you begin, and then you go and do some real experiments and/or generate some real theory.
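Since David brings up Per Bak’s sandpile as the model behind self-organized criticality, here is a minimal sketch of the Bak–Tang–Wiesenfeld rule in Python; grid size and drop count are arbitrary choices, and a real analysis would fit the avalanche-size distribution rather than just printing counts:

```python
import random
from collections import Counter

SIZE, THRESHOLD = 20, 4

def topple(grid, i, j):
    """Relax the grid starting at (i, j); return the avalanche size."""
    avalanche, unstable = 0, [(i, j)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD            # topple: shed four grains
        avalanche += 1
        if grid[i][j] >= THRESHOLD:        # may need to topple again
            unstable.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return avalanche

grid = [[0] * SIZE for _ in range(SIZE)]
sizes = Counter()
for _ in range(50_000):                    # drop grains one at a time
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    sizes[topple(grid, i, j)] += 1

# Near the self-organized critical state, avalanche sizes follow
# an approximate power law; eyeball it from the counts.
for s in sorted(s for s in sizes if s > 0)[:10]:
    print(s, sizes[s])
```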

John    00:29:53    I mean, it’s somewhat analogous. You know, you talked about Olaf, and, you know, Dani Bassett. There are a lot of these metrics that can be applied to brain data, but sort of to what David said, they tend to be very atheoretical. In other words, you get a lot of descriptors of the connectivity, but unless you have a question to ask, why should it be this versus that, it’s a beginning. And, you know, Dani herself has sort of asked: how does one go beyond increasingly sophisticated descriptive statistics of networks to something that begins to sound like an explanation of something?

Paul    00:30:38    But you have to open the bag and see what’s in the bag before you can theorize.

David    00:30:43    Yes. But, you know, Dani, like John, is one of our professors, and I know her very well. I would say that Dani is a great example of someone who did precisely that. I mean, she said, look, there’s some tantalizing statistical evidence; now let’s go in there and do control theory and do some experiments and do good science. I mean, there’s nothing surprising about that. I think if you stop at the level of phenomenology, at worst it’s numerology: it’s just finding patterns that are meaningless because they fall out of some very simple central limit theorem. So, you know, oh wow, I found all these Gaussians. Yes, of course you have, because we know what happens when you add up random variables. And I think we have to go further, and I think a large part of what’s happening in complexity science, that is, that community of researchers interested in these adaptive networks, has been doing just that.

Paul    00:31:37    I mean, there’s been a siren call for more theory in neuroscience now for, I don’t know, a decade; I don’t know when it really started to gain volume. But the pushback to that would be: yes, of course we need more theory, but, you know, we’re just now seeing the data. So you could think that it’s jumping the gun and expecting too much. But we do need more; anytime I ask anyone that question, it’s always, yes, we need more of everything.

David    00:32:06    We need more thinking.

Paul    00:32:08    Okay, well, let’s start thinking. But I mean, what do you mean by that? Because thinking to me means, okay...

David    00:32:15    Not necessarily, not a chore, in fact. No, I don’t feel that way. Look, I mean, we’ve all been to talks, right, where we are sort of doused in vast quantities of visual data, and you’re left drowning on the one hand in all of this information, but with no question being asked, no lucidity in the way that the problem has been framed. And Murray Gell-Mann, who was one of the founders of this Institute, was extraordinarily critical of talks that did either of two things: A, just present your data, as if that were somehow science, and B, just do math, because that’s not science either. And I do think that there is a kind of complexity to the scientific enterprise, that you have to be prepared to spend time thinking deeply about difficult problems, and not just have data and theorize; it’s a sort of laziness, I think. And it goes into this whole machine-based science where you just take the human out of the loop. And I think just thinking carefully and collaboratively for a long time, without a paper in mind, is a very good idea.

John    00:33:32    It’s a trade-off. There is a trade-off. In other words, you know, there’s a wonderful book by James Bridle called New Dark Age, where he gives a very bleak description of the world that we’re currently in, but he gives beautiful examples of how utterly useless surveillance is, right? There are now all these cameras and all this data, right, and it doesn’t work; it doesn’t prevent anything. So in other words, there seems to be a trade-off between collecting data versus actually slowing down and having a think, right? And so, yes, you can say, look, send out all your hounds, have pluralism in the way that you do science. Okay. But don’t say that when really it’s an excuse to never give a critical, subversive talk, ever. And I remember, I’m trying to remember, it was at a Gordon conference, right, the last one, and I gave a talk.

John    00:34:35    I can’t remember who, but someone said to me, you know, John, that’s a very different kind of talk from what I’m used to hearing, and it was actually very interesting, because I don’t think that way anymore. I feel, well, that’s the way I give talks, and I think that’s the way they should be given, but he found it extremely interesting. He said, you know, usually people show their data, right? And I showed data, he wasn’t saying that I didn’t show data, but that’s it. And it’s very difficult to grasp the context, what’s at stake, why does this matter, how does it relate more broadly; it’s as though none of that matters. Now, of course, you could always say that they could, if they wanted to, give all that context. But actually I’m not so sure, right? Because synthesis is not something that is in any way taught or promoted. And so of course you can get into discussions about what counts as thinking, but you kind of know it when you see it, and you detect, more often than not, that it seems to be absent.

Paul    00:35:43    Not everyone has the skill to synthesize. I think it is a master skill that, yes, is underdeveloped, at least in me, and across the entire population broadly. But I think it is one of the more important, and maybe underappreciated, is maybe what you’re saying, skills. Because it’s hard.

John    00:36:05    It’s also about, you know, I was reading Oliver Sacks’s essay on Darwin, when he came back, and his botanical experiments on orchids, right? It’s an incredible essay. And what that essay exudes is the insatiable curiosity machine that Darwin was, right? He was a scientist out of every pore, just getting down onto the lawn to look at the orchids. It’s just this kind of question-asking curiosity, and experiment, in one’s head as well as literally. And just that essay by Sacks on Darwin had more science in it than I’ve experienced at most talks. Now, I don’t know what that thing is, necessarily, but you want that back.

Paul    00:36:59    But so there’s a disciplined patience that comes along with that, at least with Darwin, and perhaps Sacks. I mean, is that part of the special mix?

David    00:37:08    Yeah, it’s an interesting question. I don’t want to keep us away from some of these deeper scientific questions, but I don’t think we can avoid recognizing that the industrialization of science, the sort of penetration of thought by economic considerations, and the obsession with citations and h-indices, you can’t imagine that that doesn’t compromise the quality of the enterprise, right? And in what John is describing, I don’t think Oliver Sacks gave a shit about a citation. I don’t think he would even know the word; he probably thought it was an aircraft or something. So I think that’s important for everyone to bear in mind: it’s a complexity problem, right? Culture, the economy, bears on how we reason and the way in which we produce science. And it would be nice, and perhaps a bit idealistic, if we could return to communities that were slightly less obsessed with the weight of paper and more interested in the quality of the concepts.

Paul    00:38:17    Well, that’s what SFI is, fundamentally.

David    00:38:19    Or what it has to be. I mean, it fails very often, but it wants to be. And certainly from my point of view, supporting an Institute like this, I’m absolutely committed to that, and I’d go to the cross for that. But on the other hand, we live in this world that has these perverse incentives.

John    00:38:36    We had a meeting at SFI where these two worlds collided, Paul, last year. I very much felt it was like the fable of the stork and the fox, where they invite each other over and then it’s impossible to eat the other’s food because it’s the wrong utensil, right? And I felt that at a meeting held last year on the brain, actually, at SFI, where these two kinds of ways of trying to talk about a subject went up against each other. And I’m not actually trying to be, you know, sort of bitchy for the sake of it; it was really quite stark to see the discomfort in two very different ways of talking about the same subject: wanting to be more broad and more abstract, maybe a little bit more formal, trying to look across different areas, versus let’s stick to the data, let’s know what we know. I mean, it was very stark. Now, again, one shouldn’t say that there’s one type of science only, but neuroscience, I think, would benefit greatly from relaxing a bit and going for a walk and thinking things through, across disciplines, rather than this mad rush toward publication and data collection, and substituting for whatever science is, which is another hard thing to define, all its ancillary subjects, whether it’s statistics or... you see, it’s as if they were trying to do everything other than the science itself.

Paul    00:40:07    I mean, that’s an institutional problem. There’s a lot of pressure on people to publish. Do you guys know your h-index?

David    00:40:12    Yeah, people remind me, and it’s disgracefully bad. People who are being mean tend to be the ones telling me mine.

John    00:40:20    Yeah, no, no. I do not want to know, and I don’t want to talk about it.

Paul    00:40:28    Yeah. Not to celebrate naivete, but I didn’t know what an h-index was until I was, I think, a postdoc, maybe. And then it’s appalling when someone tells you.

David    00:40:38    Yeah. It turns out it’s just the logarithm of your citation count, so it’s kind of hilarious.

Paul    00:40:45    I’ve had colleagues just sort of stay on the h-index page: is my h-index up today? So, you know, I mean, it’s a career as well; it is a career. Well, let’s put these things aside and talk about brains and minds. How about...

David    00:41:01    I do want to talk about, and I guess it will be in relation to brains, this issue of emergence. I think it’s...

Paul    00:41:09    Let’s just start off with it, then. You guys have mentioned emergence a few times now, so this is pressing on your mind. What is it about emergence that is...

David    00:41:18    I’ll talk about it generally, more formally perhaps, and then John can explain why it’s important to him, and perhaps to me, in brain and mind. Emergence is another one of those words that generates a huge amount of confusion, needlessly. So let’s just make it very clear for everybody. Here it is, and I accept that there are many definitions; I’m not going to define it, I’m going to talk about its operational value in relation to what we’re going to talk about. And that is that there are coarse-grained theories that are statistically and dynamically sufficient, and let me unpack what that means. It means that there are aggregations of variables which are principled, typically averages of some kind, which are better predictors of their future selves, or as good, I should say, to really qualify it, as good as any additional microscopic information would be.

David    00:42:15    In statistics, that’s called sufficiency; in dynamical systems, dynamical sufficiency. So in other words, you don’t get any additional predictive benefit at all by including more microscopic data. And the question then is: when is that true, and when is it false? And I just want to give a very simple example: water. If you want to understand the laminar flow of water, you don’t need to go to the microscopic constituents. You just have Newton’s second law, F equals ma, applied to fluids in the Navier-Stokes equation. And it deals with macroscopic observations and measurements: density, pressure, viscosity. Okay. If you want to understand the boiling point of water, that theory is useless, and then you have to do the theory of phase transitions, so Landau theory, and that’s all expressed in terms of microscopic Hamiltonians, right, energies of microscopic interactions. So according to what you care about, you use either the effective theory, the average theory, Navier-Stokes, which is not improved at all by including the microscopics, or you need the microscopics to explain the property of the boiling of water.
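For reference, the macroscopic equation David is describing, Newton’s second law applied to a fluid, written here in its standard incompressible form:

```latex
% Incompressible Navier--Stokes: "F = ma" for a fluid parcel,
% stated entirely in macroscopic observables (density \rho,
% pressure p, viscosity \mu), with no microscopic variables.
\begin{equation}
  \rho \left( \frac{\partial \mathbf{u}}{\partial t}
        + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f},
  \qquad
  \nabla \cdot \mathbf{u} = 0
\end{equation}
```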

David    00:43:25    And the reason it really matters is because the macroscopic theory has huge advantages. First of all, it’s computable. So it’s a completely positivistic remark: you couldn’t do it with any computer the size of the universe if you wanted to include all the microscopic detail. So that’s just practical. Learnability: when it comes to the brain, right, there are so many free parameters in the microscopic description that you’d never learn them, right? So there’s a learnability constraint, which is analogous to the computability constraint. And the most interesting one is the observability point, which is: you wouldn’t know what macroscopic property you need a microscopic description to describe unless you had it first. And so that’s a much more difficult one; it’s a very top-down concept, that you can’t get to the macroscopic from the microscopic, you have to have an observable prior. So those are very practical reasons why it matters, above and beyond the concepts of sufficiency. So I just want to put that in the background.
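A toy illustration of the sufficiency David describes, invented for this note rather than taken from the conversation: a “gas” of many noisy micro-variables whose average relaxes deterministically. Forecasting the future average from the current average does exactly as well as forecasting it from the full microscopic state.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, a = 10_000, 500, 0.9           # particles, steps, relaxation rate

x = rng.normal(size=N)                # microscopic state
macro_err, micro_err = [], []
for _ in range(T):
    x_next = a * x + rng.normal(scale=0.1, size=N)
    m_next = x_next.mean()
    # Macroscopic predictor: uses only the coarse-grained average.
    macro_err.append((a * x.mean() - m_next) ** 2)
    # "Microscopic" predictor: sees every particle, but since the
    # dynamics are permutation-symmetric and the noise independent,
    # the best forecast of the mean is still a * mean(x);
    # the microscopic detail buys nothing extra.
    micro_err.append((np.mean(a * x) - m_next) ** 2)
    x = x_next

print(np.mean(macro_err), np.mean(micro_err))  # essentially identical
```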

Paul    00:44:28    Is this all fundamentally due to the fact that you would need a computer the size of the universe? Because otherwise, I mean, you’d have to simulate things at the microscopic level to eventually actually understand them, and you physically, practically cannot do that.

David    00:44:47    Well, actually no, and it would be absurd. I mean, the example I like to give is mathematics, right? So mathematicians prove theorems. Just go to a math journal, look at the great mathematicians, Poincaré, you know, Grothendieck, you pick your favorite. Nowhere will you find any reference to psychological states of mind, firing patterns of neurons, dopamine receptors, electrons, neutrons, or quarks. It’s not considered important, and it’s not, right? Because at the level of a mathematical proof, mathematics is sufficient. Furthermore, it would be incomputable from the point of view of the atomic structure, you wouldn’t know what to observe, and it would be fundamentally non-learnable as a discipline. So it’s moot to me. Emergence is a fact. And the question then is whether you’re dealing with the laminar flow of water or the boiling point of water. And I think that’s really the interesting question, when you do have to go down a level; and for many of the systems that we study, I’m not sure we know.

John    00:45:59    Right, exactly. I think what people get confused about, and what you were sort of hinting at, Paul, is: when is it just a sort of epistemological limitation, and when is it ontologically true? Now, I would say, and David can correct me if I’m wrong, that mathematics and proofs in mathematics are ontologically independent of those other things about the world; knowing about atomic structure is simply not relevant to mathematical truths. It’s not that, if you had the computer all the time, they would add anything; they don’t. Okay. So I think the question is, when you talk again about Phil Anderson’s disciplines: are they ontologically true? In other words, does the explanatory structure of the universe form ontological classes, or are these just our cultural and epistemological failings that splay out the way they do, while there are only a few true ontological objects, presumably down at the level of physics, right?

John    00:47:10    And everything else is just derived, and if we had a computer the size of the universe, we could do away with all the disciplines. Okay. Now, in a way, I don’t even care whether one has to decide when it’s ontologically true or epistemologically true, because you want to actually get some work done. And so, you know, I think it’s the philosopher Sterelny who gives a great discussion using evolution to talk about this independence, when he gives, if I remember correctly, a wonderful description of the shape of the marsupial mole and the golden mole. These are two moles with completely different evolutionary histories, and yet they’re blind, they have snouts, they have thick fur, and they have claws. They’ve converged evolutionarily on the same solution to digging through the earth, and that’s the explanation for their body shape: it’s adaptive to the environment they live in. Now, it would be very odd if you were to say, I need to explain these two moles by going into their developmental history, or where they came from. In fact, you’d be detracting from the actual explanation by going into details, which will be different because they have completely different evolutionary stories.

David    00:48:40    Doesn’t it depend on what satisfies you?

John    00:48:40    Well, you want to know why they have the shape that they have, and the question you’re asking is that contrastive question, right: why do they have the same body shape? And I think going down lower wouldn’t add very much, unless you want to ask, why can’t they both have a whirring rotor instead? Okay, but that’s a different question, why can’t they have a whirring screw, and that’s about constraints and what’s available. That, to me, sounds a little bit like the boiling point of water: why couldn’t they have some other structure to drill through the earth? But if you want to know why they share that body shape, given biological constraints, that’s enough: it’s adapted to going through the earth.

David    00:49:32    Well, you know, I do think it’s interesting. Every question is susceptible to both forms of inquiry. And if John were to look at those moles, he’d find that under the surface they both had pentadactyl limbs, as do dolphins and whales, right? And so if that was the thing you cared about, this surprising homology, it isn’t explained by selection; it’s explained by common descent. And so I think it’s always going back and forth. I suspect, I don’t know, John, my criticism is that there is this belief, and I’m not sure quite where it comes from, that the most fundamental, the truest description, is the most microscopic, the most reduced.

John    00:50:18    Yeah. I mean, the other example that’s given a lot by philosophers, I don’t know if we spoke about this before, Paul, is about causal contrast. The one that’s given, I think Carl Craver gave it first, is, you know, why did Socrates die, right? And somebody might say, well, Socrates died because he was condemned to death by the Athenian authorities for corrupting the youth. Okay. Somebody else will say he died because he chose to drink the hemlock rather than go into exile.

Paul    00:50:48    Well, I thought it was a Caesar analogy: he died by a metal spike in his chest, versus by the Senate.

John    00:50:55    So then you can say, well, it’s because, you know, he drank hemlock rather than English breakfast tea. And then you can say, well, hemlock operates on a certain part of your body. Now, the point is that these are all ontologically equal in terms of efficacy as explanations, but neuroscientists, and this is the point they would make, will think that if you can work out the mechanism by which the hemlock makes you stop breathing, that that’s the best explanation for the death of Socrates. And it just isn’t, right? And I think that’s the point: there’s this strange belief in neuroscience that there’s a foundational, privileged causal contrast, and everything else will ultimately devolve to that causal contrast. And that’s the odd thing, whereas physicists are absolutely fine having Navier-Stokes equations for fluid dynamics versus having phase transitions, and see that those are just different regimes of explanation.

Paul    00:51:58    Isn’t ontology fundamentally out of our reach, though?

David    00:52:01    I don’t think so. I don’t think so. You can be Kantian, no, I know where you’re going, but here’s the next nice thing, right, which is that it has to do with degeneracy: whilst it’s true that our representation of the world might not be identical to it, I mean, I should explain what that means, by the way. So I just have to formalize this, because John and I argue about this all the time, and I want to make sure I’m being clear. Say there is some structure X. We would call Y a representation of X if Y is the image of X under some structure-preserving map. Okay. And so think about, you know, retinotopic maps, or motor-sensory homunculi; they’re all Ys, right? And they lose information in X, but they preserve some structural feature of X, some geometric, some topological.

David    00:52:55    Now, if that were not true, Paul, selection would be ruthless to us and remove us from the world. So it doesn’t mean that we are identical; Y and X are not identical by any means, but Y has to maintain something which is absolutely true about X. That’s the sense in which I do believe it’s possible to have ontological unity, not disciplinary unity, I’m with John there, but I don’t like this idea that somehow, because we can’t know the world exactly, other than mediated through our senses and our instruments, that means we know nothing about the world. That’s just false.
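David’s definition, written out: a representation need not copy the world, only preserve some of its structure.

```latex
% Y represents X when Y = f(X) for some structure-preserving
% (homomorphic) map f: if \circ is a relation or operation on X
% and \bullet its counterpart on Y, then
\begin{equation}
  f(x_1 \circ x_2) \;=\; f(x_1) \bullet f(x_2)
  \qquad \text{for all } x_1, x_2 \in X .
\end{equation}
% f may lose information (it need not be invertible), but whatever
% structure it preserves is genuinely true of X -- the ontological
% through-line David appeals to.
```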

John    00:53:31    That’s a very important point, that there must be some similarity transform that is possible, because if there weren’t... I mean, that’s what’s so nice, I think, about Sterelny’s mole example: it’s what evolution is working on. It’s almost an example that evolution has given to us, right? That it’s actually converged on this body shape to burrow through the earth. It’s sort of demonstrating to us that there is something ontological that it’s operating on, because otherwise there would be no survival. So in other words, we have to believe, and that’s what David I think is saying, there has to be some mapping, right? Of course there has to be.

Paul    00:54:16    So this kind of comes around, and we’re off course, which is totally fine with me, but this kind of comes around to David Deutsch’s conception of what explanation is. I don’t know if you guys are familiar, but it’s along the lines that an explanation is the latest thing that’s hardest to vary, and a better explanation is harder to vary as the explanation. And between the epistemology and the ontology that we’re talking about, David, you’re saying that there must be some truth if there is a mapping that endures over time between X and Y. But, and I might be misunderstanding what ontology is, I would say that we only have epistemic access to that, and not ontological access.

David    00:55:04    No, I understand. But again, this is the critical point about this sort of structure preservation; mathematicians call these homomorphisms. And the critical point is, I’m much more optimistic in some sense, because what you say is true by definition: we are instruments that make measurements, it’s physiology, everything is. But, as John pointed out, the behavior that ensues from those measurements has implications in that same real world, right, which doesn’t really care about our epistemology, but it does care whether we can fly or swim. And so that maintains an ontological through-line, and it doesn’t matter to me whether or not we achieve identity, just that we achieve some kind of structural correspondence.

John    00:55:58    And, well, you see, it’s very important. It’s deeper than that, Paul. You have to decide... I think you had a really wonderful person on your show, I find it difficult to pronounce her name.

John    00:56:11    Mazviita Chirimuuta. Yeah. She was very interesting on this point, where she said: do not confuse the goals of science, of trying to seek truth versus trying to seek understanding; they’re not the same. Okay. And I think she was right about that. It gets a little bit, and we’ll get to this later with a piece David wrote recently, at the idea that there’s probably a sort of veracity-understanding trade-off. And wouldn’t it be interesting if getting models that are difficult to understand but fit the world better is a different discipline from science, which does have to have understanding in it, in my view, to be called science. There may be another discipline that is a better fit to the world but will be opaque to us. And so if science is going to have anything to do with the people we’ve been discussing so far, then I think understanding must feature in it; otherwise, I think we should just give it another name.

Paul    00:57:28    Well, this is an issue, I think. Understanding in the philosophy of science has sort of exploded recently. And actually, John, you turned me on to Hasok Chang’s work, who turned me on to, um, de Regt.

John    00:57:43    Hey, actually, I turned you on to Henk de Regt as well. That’s fine, I’m very glad. And actually he was mentioned in that podcast too. Um, yeah.

Paul    00:57:57    Yeah. So there are like four other recent books on the different natures of understanding and our conceptions of them. And you guys couldn’t have heard this yet, but I just had Jim DiCarlo on the show, who has sort of headed up this modeling of the ventral visual stream in feedforward deep networks, and now recurrent deep networks, as this hierarchical system. And it models the brain really well and predicts brain activity very well. And now he’s controlling brain activity, using the models to generate images to control neural activity in the brain by presenting the stimulus to the subject. And his conception, which was fun because he’s pretty excited about this, is that he sees control and prediction as the same thing as understanding.

John    00:58:43    Yeah. I mean, I know Jim, and he’s fantastic, but that’s a complete and utter cop-out. I mean, he’s basically decided that if you can do control and prediction, that will be the new understanding. Sure, I mean, if you want to give it this new name. But it’s much more interesting to me what de Regt and others say, and, you know, in line with what Feynman and many others have said, which is that you need to have an intelligible theory to build explanatory models of the phenomenon. You should be able to do intuitive work with that intelligible theory to generate explanatory models to explain the physical phenomenon. And if you don’t do that, right, and you know, it’s the Dirac idea that we discussed last time I was on: you should be able to see the implications of the equations without having to go through the full derivation.

John    00:59:38    You can do intuitive work. Okay? And science is this ability. Maybe it's your own way of developing an effective theory that you can work with, to generalize, to do new experiments. That's science. If you don't have that, then I don't know how science proceeds, other than saying, well, let's just stop doing that kind of science, let's just do deep neural nets and be model-free. Right? But let's not deny that we're losing one thing over the other, and that there's a trade-off. Because there is a trade-off.

David    01:00:19    Yeah. I mean, I have a lot to say about this, Paul. Sure, take your time. Well, I agree with John, of course. I mean, whatever the other chap said was just ridiculous. And if he wants to make those two words have the same definition in the dictionary, he can, but I'd rather have a dictionary with more than one word in it. So let's just look at prediction, and let's make it simple; I find these things useful. Prediction is very simple: it's just getting an input-output relation correct out of sample. That's prediction. So if the input is time and space and you tell me temperature, that's called weather prediction. That's what prediction is: an IO relation that works. Now, knowledge is, of course, the facts that go into making the IO work.
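
A minimal sketch of prediction in David's sense, an input-output relation that holds out of sample; the "weather" data and the polynomial model below are invented illustrations, not anything from the conversation.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 200)                      # input: time
    temp = 10 + 5 * np.sin(t) + rng.normal(0, 0.5, t.size)  # output: "temperature"

    idx = rng.permutation(t.size)                 # random split into seen/unseen
    train, test = idx[:150], idx[150:]
    coeffs = np.polyfit(t[train], temp[train], deg=7)   # fit on training data only

    # prediction, in this sense, is the IO relation holding on inputs never seen
    out_of_sample_error = np.mean((np.polyval(coeffs, t[test]) - temp[test]) ** 2)
    print(out_of_sample_error)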

David    01:01:09    But knowledge also goes into understanding, and I want to talk about understanding; I know John is interested in this, and I cite some work by John on it. For me, understanding is the communication of why the IO works, or the construction of the IO in the first place. So: communication and construction. And let's make this quick, as everyone listening knows this already. If you're taking an exam and you copy the answer of someone who you know always gets A's, that predicts success for you, but you understood nothing. You used a simple rule and it worked. What most good teachers ask, once you've produced a result, whether it's right or wrong, is: how did you get there? Why did that work? Why does summing up rectangles in the limit give you integration? Anyone can use an integration formula, but not everyone can explain why it works.
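
David's rectangle remark is the Riemann sum, and the claim can be checked numerically. A minimal sketch, with an arbitrary example function (x squared on [0, 1], whose exact integral is 1/3):

    def riemann_sum(f, a, b, n):
        width = (b - a) / n
        # left-edge rectangles: each contributes height * width
        return sum(f(a + i * width) * width for i in range(n))

    for n in (10, 100, 1000, 10000):
        print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
    # prints 0.285, 0.32835, 0.33283..., 0.33328..., tending to 1/3 as n grows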

David    01:02:05    That's the difference between a good mathematician and a crappy mathematician, quite frankly. Or a good scientist and a crappy scientist, a good teacher and a crappy teacher, a good student and a crappy student. We know that. So there is much more than prediction, right? There's much more than knowledge, and understanding is tricky. Philosophers have commented on this, John Searle most famously in his Chinese room thought experiment, and that's been of great interest to me. There's a lot to be said about it; I think Searle gets a lot wrong. And I do want to mention something John has done that bears on my thinking on this topic. So again: understanding goes beyond the IO map. It goes to the explanation for why it works, or how you construct it in the first place.

John    01:02:49    And just one thing on that before David goes on. It's not to knock people like Jim, obviously, or all the people who build these impenetrable IO networks, right? Their understanding finds other places to land: how do you build a cost function, how do you play with the architecture, what is the learning rule? There are places where those people show great understanding. But the thing itself, can I explain the performance of this system? They admit that they can't. Now, that's fine, but don't, because that's true, say: I'm no longer going to consider it a problem that I can't understand that piece, you just live with it. We'll get, a little bit later, to what I think is a solution to that problem, because I'm not completely satisfied that you should just say, when it comes to deep neural nets: it's over-parameterized, we'll never understand why it's this way versus that way in terms of the connection strengths, and therefore let's fall back on knowing what the cost function is and what the learning rule is, and accept that, asymptotically, it learns.

Paul    01:04:05    Well, so what you just said was put forward as the beginning: we can't understand these things yet, so let's start with learning algorithms and objectives.

David    01:04:15    We have to be careful; that's not what, well, I don't know your previous speaker, but that's not what you communicated he said. No, no, no. Saying prediction is understanding is patently false. I do want to give a bit of history, though, because we've been here before, right at the origins of science. The example I'd give is Francis Bacon in the Novum Organum. He makes this beautiful remark: he says it's very difficult for humans to draw straight lines or circles, so we use rulers and we use compasses. We use tools, and in some sense they subvert our abilities, and deep learning is just, you know, compass-prime. Now, interestingly, the tool that Newton developed, with Leibniz, was the calculus; well, they didn't really, actually, that's a kind of falsity, but the fundamental theorem of calculus, the relationship between differentiation and integration, and its application to orbits.

David    01:05:12    And it's quite interesting, if you read the Principia, which I have not, but I've read reviews or summaries of it, that the method of fluxions, which was Newton's name for the first derivative of space with respect to time, that is, for velocity, was not in it. When he wrote it, the method was too arcane, and he was too paranoid to actually disclose his discoveries. He presents his results geometrically, because they were better understood than his new methods. So Newton felt that just predicting using his methods was not sufficient. He wanted you to understand why his theory of gravity could reproduce Kepler's three laws, and you did it geometrically. Now, when he went on to discuss the inverse square law, Huygens hated it, because it was non-mechanistic. It wasn't understandable; it posited action at a distance.

David    01:06:09    And then Newton turned around and said, hypotheses non fingo. You know: it's good enough, it predicts really well, why should I have to do more than that? Descartes hated it and came up with his theory of vortices, which was much more mechanistic, and which didn't work. So right there in the early days of the scientific revolution was a theory that was predictive, whose author chose to present it in terms of a method that people could understand, geometry, because they were more familiar with it, but which was criticized by people like Huygens for being insufficiently mechanistic and merely predictive. And then there was a suggested alternative. That then continued to its limit with quantum mechanics and what we now know as the Copenhagen school, where people like Bohr and Pauli and Heisenberg disavowed any intuitive understanding of the physical world, they hated it, and replaced it with purely predictive mathematics. And the famous expression of that position came from

David    01:07:12    David Mermin, when he said: shut up and calculate. Don't even bother. So physics has this tradition. It's no different from what's going on now in neuroscience and machine learning, and it generated endless discussion back and forth. And now, as of today, there's a return to the foundations of quantum mechanics, where people are trying to provide, as John said, an understanding of why these methods work. Murray Gell-Mann, Jim Hartle at Santa Barbara: they couldn't stand this. They wanted you to provide a comprehensible theory, as Einstein had, and Schrödinger had. So this seems to be a quite universal feature of the scientific enterprise: there are those who tend to favor predictive efficacy even if it forfeits intuition, and those who don't like that and feel that the humanistic aspect of science, the creative aspect of science, requires understanding. Are they both necessary for progress?

David    01:08:05    I think they are. And I think what, if anything, John might be saying, and certainly I would be saying, is that the extraordinary power of predictive frameworks in the face of complex phenomena of very high dimension (we could talk about that) makes this much more complicated to argue. In other words, there's something about a complex phenomenon that is so much easier to predict than to understand that that side of the equation, if you like, gets differentially weighted. And that's a bit of a problem. But only via simulation, though, right? No; in other words, the nice thing about fundamental physics is that you can have your cake and eat it. You can say: I'm going to predict with this theory to a hundred decimal places, and I can write down on one page the equations which generate that solution, which human brains can parse. But when it comes to projecting market trading, or the spiking of that particular neuron, unfortunately it looks as if the representation of the structure, this "why" thing, is of very, very high rank. And that has lent itself, in the short term, to the utilitarian school, which says either prediction is understanding, which is kind of silly, or we don't care about understanding because you can never accomplish it.

John    01:09:37    Yeah. I mean, I think that's absolutely true, and it goes back to the question from before. I was looking into this quite a lot, and de Regt actually has a whole chapter, chapter seven of his book, on exactly what David was talking about with quantum mechanics: the Schrödinger equation and the wave function being something people could picture and do intuitive work with, versus Heisenberg's matrix mechanics just churning out the numbers. So it's absolutely true that we've been here before. And again, it speaks to the parochialism of neuroscientists that, for the most part, they just don't look outside their own discipline. But one criticism that has occurred is that some people would say the reason we've had understandable neuroscience for quite a while is that we made such simple experiments; it was the simplicity of our experiments that reined in the multidimensionality that David's talking about.

John    01:10:36    And as soon as you started going into the real world, with more naturalistic experiments, it got hopelessly complicated, right? It wasn't just one bar in a dark room; it was complex images, movies, things like that. And you had Uri Hasson on, right, who very much made the point that it just isn't right to consider the human brain as overtly modeling the world. It has so many neurons, so many parameters, it can just do direct fit. And there again, it gets to the epistemological error of doing simple experiments that require representations that lead to understanding, when in fact you have this ability to fit the world, and you fit so much in the course of your life that everything is interpolation. You only really need out-of-sample generalization, what David was describing, if your training set was so small that what you're going to encounter in the rest of your life is not present in the original training set.
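
A toy version of Hasson's direct-fit point: bare nearest-neighbor lookup, with no compact model at all, does fine wherever sampling is dense (interpolation) and fails outside it (extrapolation). Everything below is an invented illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    world = np.sin                              # the "world" being fit

    x_train = rng.uniform(0, 10, 5000)          # dense sampling of experience
    y_train = world(x_train)

    def direct_fit(x):
        # no theory, no compression: just recall the nearest stored experience
        nearest = np.abs(x_train[None, :] - x[:, None]).argmin(axis=1)
        return y_train[nearest]

    x_inside = rng.uniform(0, 10, 100)          # interpolation: inside the sampled range
    x_outside = rng.uniform(12, 15, 100)        # extrapolation: outside it
    print(np.abs(direct_fit(x_inside) - world(x_inside)).max())    # tiny
    print(np.abs(direct_fit(x_outside) - world(x_outside)).max())  # large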

John    01:11:42    Okay. So there is a sense, not just that there's an option, but maybe it's true, that for the most part, because there's no overt representation, we shouldn't come up with an intuitive understanding, and we should just accept that you fit. And I think he gave the example of evolution, right? It mostly optimizes, but it's not going to predict what the next species is going to look like; it's not like you're going to predict the next creature's morphology, within constraints. So let's just do interpolation with vast sampling and loads of parameters. But interestingly enough, he fell short of cognition. He admits that cognition is not understood that way. And, you know, it's very interesting that, um,

Speaker 2    01:12:35    You mean how that maps

John    01:12:37    Onto mind. It doesn't work for mind; he admitted that. And Geoffrey Hinton actually says that the last thing that will yield when it comes to AI is cognition. So you've got this interesting pair: the evolution of neural networks and, if you believe in cognition getting more complex as you move towards primates of our kind, cognition itself. Both of them don't yield to what's happened so far in neuroscience or in AI. And the reason I bring this up is that in the meantime we may need intuitive explanations of these phenomena to work with, even if those things are true. So one of the biggest things that I would love to prove, David, when it comes to all these things, hierarchy, emergence, complexity, aggregation, is this: I've always wondered whether there will always be some intuitive, understandable form of the question at some level of coarse-graining, even though at another level, the predictive level of the system, you won't understand it.

John    01:13:48    So in other words, you just accept that, but there'll be another way to talk about the phenomenon that will yield to more intuitive explanations. You can have your matrix mechanics, but there'll always be a wave mechanics way of talking about it as well. And maybe that's just hopeful thinking on my part, or maybe it's actually true that hierarchical, complex systems that omit details as you go up have two secrets about them: one form that is predictive and hard to understand, and another form that can be intuitively talked about. Is there some principle there, that you can have two flavors in any complex system?

David    01:14:27    Yeah, and yes, I want to get back to Searle in a second, because I realize I didn't complete that thought. But to John's point: I've always felt that there are two paradigms emerging now. There are the fine-grained paradigms of prediction, which deal with these very high-dimensional phenomena that are somewhat incompressible, so they don't lend themselves to intuitive frameworks. But then there are the coarse-grained paradigms of understanding. And a good example of that, just to bring it to neural networks, and I think John mentioned this, is that we know how reinforcement learning works. AlphaZero has a very, very simple learning rule, and that's the level at which understanding tends to operate in that case. We have, as a community, DeepMind and their colleagues, come to understand that there's a very simple kind of learning rule that can train these very large, elaborate structures. However, the elaborate structure, once consolidated, is opaque. And I think it's true that understanding of complex systems might be a little bit like the learning bit, while the prediction is done by the structure.
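
To make the split concrete, here is a hedged miniature of that division of labor, using plain tabular Q-learning on an invented ten-state corridor (AlphaZero's actual self-play training is far more elaborate; this only shows the flavor). The learning rule fits on one line; the table it fills in is the structure that then carries the predictions.

    import random

    random.seed(0)
    N, GOAL = 10, 9                       # states 0..9, reward only at state 9
    Q = [[0.0, 0.0] for _ in range(N)]    # the trained "structure": a value table
    alpha, gamma = 0.1, 0.95

    for _ in range(2000):
        s = 0
        while s != GOAL:
            a = random.randrange(2)                     # explore at random (off-policy)
            s2 = max(s - 1, 0) if a == 0 else s + 1     # action 0 = left, 1 = right
            r = 1.0 if s2 == GOAL else 0.0
            # the entire learning rule:
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    # the learned greedy policy: move right at every state
    print([max(range(2), key=lambda act: Q[s][act]) for s in range(GOAL)])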

John    01:15:39    I mean, that's very much the Lillicrap and Kording position, right? They said, look, you can write the reinforcement learning rule and the objective function on half a page. And maybe that's, well, actually, I don't fully agree with that, because I think there are psychological terms that can be used to explain phenomena that neurally are too complicated.

David    01:16:10    I agree with that too. So, I started to make a point; I think that's right. I think the theory of scale that my colleagues here have been developing, Geoffrey West and others, is a good example of these coarse-grained frameworks that also do extraordinary prediction: coarse-grained prediction. And what John is suggesting, which I think is correct, demonstrably correct in some domains, is that not only is there this kind of theory, like the learning theory, which is a theory that can be understood to train a network, but there might even be a coarse-grained theory of how the network works. And that's interesting; there's been a lot of work on it, as you know.
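
A sketch of what coarse-grained prediction looks like in the scaling-theory spirit: a Kleiber-style three-quarter-power law recovered as a log-log slope. The "species" below are synthetic stand-ins, not real data.

    import numpy as np

    rng = np.random.default_rng(2)
    mass = 10 ** rng.uniform(-2, 4, 60)                         # body masses, made up
    rate = 3.0 * mass ** 0.75 * np.exp(rng.normal(0, 0.1, 60))  # noisy 3/4-power law

    slope, intercept = np.polyfit(np.log(mass), np.log(rate), 1)
    print(round(slope, 2))   # close to 0.75: one coarse number predicts across scales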

John    01:16:50    And I believe that. I mean, I think that's what cognitive neuroscience and psychology are; you know, the way William James wrote his entire book, right, that defined our disciplines. It gets back to what we discussed at the beginning: are these disciplines ontological or epistemological? And I'm just saying, wouldn't it be interesting if psychological terms just happened to be the correct coarse-graining for discussing the performance of systems that at the neural level are opaque? Wouldn't that be more interesting than if they weren't? I'm saying it doesn't surprise me that there are, as David said, averaged-out objects that you can think with. And I'll give you an example, just to be very neuroscientific. Look at what Mark Churchland and Krishna Shenoy have done, where they've looked at motor planning and motor execution as a trajectory through a state space.

John    01:17:51    It's very interesting. David's talked about the fact that neural networks and dynamical systems have an intimate relationship to each other; neural networks are dynamical systems, right? But it's interesting that when you look at the work that's very much in fashion right now, where you take millions of neurons, do dimensionality reduction, and then look at a trajectory through a subspace, the dynamical systems approach, it's very much like the Schrödinger-Heisenberg thing, or like Feynman diagrams. Feynman diagrams were another visualization tool that allowed people to have intuitions. So if you talk to Mark Churchland, I remember him saying to me as an aside: you know, I've started to look at so many of these trajectories through these state spaces.

John    01:18:44    He can see them; he actually thinks with them. And it's very interesting that these are objects derived from neural data, thousands, if not millions, of neurons. But if you asked him, do I need to know the connectivity of each of these neurons? Of course not. In fact, it has to be not the case, because in any given animal you're going to see very similar trajectories, but the animals don't have a one-to-one correspondence of their neurons. So the invariance happens at a level above those details, and you can think at the level of that invariance.
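
A sketch of that move on simulated data (nothing here is a real recording): a thousand "neurons" share two latent dimensions, and PCA recovers a two-dimensional trajectory one can look at without knowing any neuron-to-neuron wiring.

    import numpy as np

    rng = np.random.default_rng(0)
    T, N, K = 300, 1000, 2                    # timepoints, neurons, latent dims

    t = np.linspace(0, 2 * np.pi, T)
    latents = np.stack([np.sin(t), np.cos(t)], axis=1)         # (T, K) shared rotation
    weights = rng.normal(size=(K, N))                          # each neuron mixes the latents
    rates = latents @ weights + 0.5 * rng.normal(size=(T, N))  # (T, N) noisy "firing rates"

    X = rates - rates.mean(axis=0)            # center, then PCA via SVD
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    trajectory = X @ Vt[:2].T                 # (T, 2): the object one "thinks with"

    var_explained = (S[:2] ** 2).sum() / (S ** 2).sum()
    print(trajectory.shape, round(float(var_explained), 3))  # two PCs carry most variance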

David    01:19:22    Yeah, I'll give you a good example of that, actually, from a totally parallel domain, and that's computer programming. The great genius of computer programming is Donald Knuth, who wrote about programming languages and many, many other things, and makes the point that programming languages are the interface between the machine and human understanding, human reason. They're not merely a tool for programming; you could use machine code for that, right? You could write in assembler. No one writes in assembler; everyone writes in Fortran or C++ or Python or Go, or whatever your favorite language is. And there's no debate in the computer science community about the value of using high-level programming languages. There's no machine-code zealot who says the only way to really understand it is to write in ones and zeros; you'd be a lunatic. And I think it's the same thing. I'm sure, well, there probably is one, but I think it's the same thing: our high-level psychological, cognitive, literary, mythological abstractions stand in relation to reality as high-level programming languages stand in relation to circuits. They allow us not only to understand them better but to program them better, to control them better. I don't see why that's even controversial.
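
Knuth's point in miniature: the same computation at a machine-adjacent level and at the level people actually reason in (a deliberately trivial Python illustration).

    values = [3, 1, 4, 1, 5, 9, 2, 6]

    # machine-adjacent style: explicit accumulator and index, spelled out
    acc = 0
    i = 0
    while i < len(values):
        acc = acc + values[i]
        i = i + 1

    # the high-level abstraction you actually think with
    total = sum(values)

    assert acc == total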

John    01:20:45    I mean, just on that point, I think this is sometimes the heart of the matter: it's not that there's hierarchical organization of a computer or a nervous system just so that we can understand it or program it better. It's actually for itself, to control itself better. In other words, the nervous system has its hierarchical structure because that's the way you have to organize a complex system. The nature of the commands that go out of motor cortex versus premotor cortex is very different in terms of detail; the type of detail required at the level of the spinal cord, in terms of temporal and spatial dynamics, isn't present higher up the hierarchy. You omit details so that control, as David said, is easier.

David    01:21:42    Yeah, we call this system self-observation. And it's absolutely right, because any adaptive system, and that's what makes them interesting, by the way, is theorizing about itself. So if you write software that's interoperable with other software. You know, I do everything in Emacs, and I'm constantly adding Emacs extensions; they're written in Lisp, and they can control this, you know, OS-text editor hybrid. That's the right level at which to operate. So John's absolutely right. That's the great ingenuity of hierarchy: once you operate at that level, you can control at that level.

Paul    01:22:22    So bringing this back. We could bring it back to any level, but I'm thinking specifically of thinking in dynamical state-space trajectories, like you mentioned, John, that Mark Churchland told you he does. Is that understanding? I mean, there's this question: when you're using a tool like neural networks, or like dynamical systems theory, can you use your way to understanding, just by practice, making it implicit in your conceptions?

John    01:22:57    Yeah. So as you know, de Regt and others have talked a lot about that. If you're going to be able to do intuition like Dirac and Feynman did, look at the equations and know their implications without going through the derivation, that's because you're skilled at it. De Regt makes a big point that in order to do that intuitive work at a given level of coarse-graining, you have to become skilled and practiced at it. And so what Mark was saying, in a sense, is that he's become skilled at thinking with trajectories, just like people can look at Feynman diagrams. So that is understanding. It's very important, just to answer the first part of your question: understanding is having intelligible theories to build explanatory models that map onto real phenomena. So let me give you an example.

John    01:23:49    Whenever any of you talk about the stretch reflex and somebody asks you what the reflex is, you will think of the muscle spindle, the Ia afferent, the synapse with the motor neuron back onto the muscle. And then you'll think of an inhibitory interneuron going to the antagonist. So the reflex, when you think about it intuitively, has neural objects in it; you're thinking about axons and neurons. Now, when you get to what Mark Churchland is doing in motor cortex: if Sherrington had lived, he could have seen what happens when you take that approach. He would have realized that people aren't worrying about how neuron A connects to neuron B connects to neuron C connects to neuron D, in all that detail, in motor cortex. It's abstracted up to coarse-grained measures. They are neurally derived, but now they're trajectories in a state space. So yeah, I'm working on this paper, I think I told you, it's almost done, called "Two Views of the Cognitive Brain," with a fantastic neuroscientist and philosopher called David Barack, where we're talking about this. And it's possible to have a functional explanation of high-level behaviors, where you have these psychological terms and add to them a neural piece, just like the axons in the stretch reflex, but the neural piece is a trajectory in a state space. It's a dynamical object.
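
The circuit John lists can even be written down as a toy model: spindle stretch drives the Ia afferent, which excites the agonist's motor neuron and, via the inhibitory interneuron, suppresses the antagonist. Every gain below is an invented illustration, not physiology.

    def stretch_reflex(stretch, gain_ia=2.0, gain_inhib=1.5, baseline=1.0):
        ia_rate = gain_ia * max(stretch, 0.0)         # Ia afferent firing
        agonist = baseline + ia_rate                  # monosynaptic excitation
        antagonist = max(baseline - gain_inhib * ia_rate, 0.0)  # disynaptic inhibition
        return agonist, antagonist

    print(stretch_reflex(0.0))   # (1.0, 1.0): baseline tone, no stretch
    print(stretch_reflex(0.5))   # (2.0, 0.0): agonist up, antagonist suppressed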

David    01:25:19    It's quite interesting, John. I'll give you another example of this, again from another domain. In 2018 a mathematician called Peter Scholze won the Fields Medal for creating something called perfectoid spaces, which have to do with Galois representations and so on. And one of the things he pointed out is that when he's trying to solve a problem, the first thing he really has to do is come up with an almost linguistic definition of the mathematical problem. If he doesn't have that, he can't do the math. Now, this is a Fields Medalist, an extraordinary mathematician, and he's saying: I need to move between levels. Exactly what John just described. He doesn't operate at the machine code of mathematics; he moves up and down. And I think one of the really interesting questions I would put to you and John and so forth is: could we come to an understanding of how many levels we should have for a given observable, and what would be the theory that would tell us how to optimally allocate our attention across those different levels?

David    01:26:22    I mean, that's something which would be very much a complexity problem. It feels very meta, but I imagine the nervous system has

John    01:26:29    Yes, exactly. The nervous system has to do it. And it would go against all the other things I've learned through the Santa Fe Institute and David and other complexity scientists, it would seem very odd to me, if, as behavior gets more complex and the neuronal aggregates in cortex get larger, the explanatory object were to remain a Sherringtonian-style description of a circuit. In other words, the idea that your level of explanation will never change no matter how much more complex your behavior becomes: what a bizarre thing that would be. So one thing I can be fairly sure of is this: circuits in insects and in the spinal cord are the machine code, and the idea that once you get into cognition and cortex you're going to be able to revert to that level of description flies in the face of what other hierarchical systems and complex phenomena do. You have to come up with new objects that are more abstract. And actually, another point, since you were talking about Olaf Sporns and Dani Bassett versus what Mark Churchland does: I find that trajectories in state spaces

John    01:27:42    give me a feeling of understanding that connectivity metrics utterly fail to. That's right; they just don't do it for me. And it's very interesting. You know, I have huge respect for Dani, she's super bright, but when she's written most recently about the kind of understanding and hypotheses you can test with connectivity, they themselves are couched in connectivity language: is this area autonomous, or very connected to this area? It's just connectivity language again. Whereas I find dynamical systems and trajectories seem to be something that adds to the psychological terms.

David    01:28:25    It's interesting, Paul; this gets to your opening remark about that sort of plurality of frameworks that we need to understand something complex. And maybe, given my position, I'm very open-minded: I feel as if they're all great, as they all illuminate a phenomenon. I think the problem is always the reduction to one, this belief that there's only one best way of doing things. And I think John's right. For many people dynamical systems work, for others these combinatorial, algebraic structures work, and we as brains, by virtue of our histories, presumably work differently. If anything, complexity is a kind of liberalism, right? It says: let's allow for the possibility of a multiplicity of approaches and not assume one is real.

John    01:29:19    You know, it's interesting. I don't know, Paul, have you read the new history of neuroscience, Matthew Cobb's book The Idea of the Brain?

Paul    01:29:27    No, but isn't it just a list of metaphors? I've not read it.

John    01:29:32    Well, no, I think it actually works as a scaffold for thinking, and it's very good. I love the history part and the early present; I think once it gets into current neuroscience and prediction of the future it gets more impoverished, but I don't know whether that's Matthew Cobb or whether the field itself has sort of asymptoted. It is a good book. I really do recommend it; it's got lots of delicious, rich stuff in it, and he's done a good job. It's not easy to synthesize all that material. But I'll tell you what's fascinating about it: he has a section at the end of the book where he talks about the future, and it's very interesting that he begins by talking about emergence but then drops it like a bad smell, right? I think he said something like: emergence is the unsatisfactory explanation before we get to the real explanation, right?

John    01:30:32    And then he moves on to where he feels the real progress will be made: let's get back down to the circuits and the neurons themselves, let's study cognition in a fly, where we have the sort of Sherringtonian connectivity map, and then we'll do some sort of extrapolation to cognition in humans. In other words, you see this tension in the field between not really wanting to talk about coarse-graining and psychological terms and derived measures, and saying: surely we can avoid that awful fate for our field by going into a fly or a worm, where we can have the same level of connectivity detail and intuition as we did for the stretch reflex, but now we can apply that understanding to something we call cognition and then somehow extrapolate from that higher up the neuraxis. In other words, you see that there's this tension that just won't go away.