Brain Inspired
BI 182: John Krakauer Returns… Again

Support the show to get full episodes and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he’s been working on and thinking about lately. Things like

  • Whether brains actually reorganize after damage
  • The role of brain plasticity in general
  • The path toward and the path not toward understanding higher cognition
  • How to fix motor problems after strokes
  • AGI
  • Functionalism, consciousness, and much more.

Relevant links:

Time stamps
0:00 – Intro
2:07 – It’s a podcast episode!
6:47 – Stroke and Sherrington neuroscience
19:26 – Thinking vs. moving, representations
34:15 – What’s special about humans?
56:35 – Does cortical reorganization happen?
1:14:08 – Current era in neuroscience

Transcript
[00:00:00] John: 

I always am a bit baffled by people who deny either that there is such a thing as higher cognition, which is, we can have a conversation about that, and then the people who want to sort of concoct a sensory-motor origin for it.

There are many cherished ideas that continue to be used, not because they are likely to be what’s really going on, but because they are very amenable to formalism.

The current craze in AI is large language models, which are entirely based on a human product, right? So in other words, the irony is just delicious that suddenly here we are dealing with human language.

[00:00:55] Paul: This is Brain Inspired. I’m Paul. If you listen to Brain Inspired regularly, my guest today needs no introduction, so aside from his name, I won’t give one. John Krakauer has been on the podcast multiple times, and if you like our discussion today, I link to his previous episodes in the show notes at braininspired.co/podcast/182. Today we discuss some topics framed around what he’s been working on and thinking about lately, things like whether brains actually reorganize after damage, the role of brain plasticity in general, the path toward, and the path not toward, understanding higher cognition, how to fix motor problems after strokes, artificial general intelligence, how next time we should both agree that we’re going to record our conversation for a podcast, and plenty more. And if you’re a Patreon supporter, you get 30 extra minutes, wherein we discuss my own current research project, and John lends his thoughts to that. Bolshoye spasibo, Patreon supporters. That’s a little rusty Russian there. I apologize. Here’s John.

So this is a fun way to realize that you did not know that we were going to be recording a podcast episode today, because I said, hey, I’m about to hit record. But before I do, I was going to ask you something else, and you said, oh, I didn’t know we were doing that.

Welcome, friend of the show, John Krakauer. Hello. How are you? Happy New Year.

[00:02:26] John: Happy New Year to you. Yes, I can tell this is going to be a harbinger of 2024 being full of surprises.

[00:02:33] Paul: Yeah, we were kind of just shooting the shit there for a while, and I thought, gosh, we should get recording pretty soon here. And then I surprised you by saying, hey, that’s what we’re doing.

[00:02:42] John: It’s always a mistake when you think somebody’s your friend.

[00:02:45] Paul: Oh, come on. I was about to ask you something very personal, which I’m now not going to ask you, at least in the recorded version.

[00:02:52] John: Yeah.

[00:02:55] Paul: So we had gone back and forth a little bit, kind of teased each other back and forth a little bit about you coming on again.

And I thought this would be a good time, specifically with a list of grievances. And I thought that’s sort of a euphemism for just topics that you might want to discuss. But I thought in 2024 this could become an annual list of grievances by John Krakauer, a sort of recurring episode.

[00:03:24] John: Oh, my gosh. I kind of like that. Yeah.

[00:03:27] Paul: So we were catching up on how you’re globetrotting and starting some programs, running some other programs, designing a warehouse. There’s an interior designer in your actual house right now.

And we chatted the other day, and you were telling me, I’m not sure what you want to chat about, but you were telling me about a new manuscript that you’re working on, that you’re excited about. So really, the floor is kind of open to discuss whatever we want to discuss. And actually, I would like to pick your brain, because I’m back in academia and I’m swimming in a sea with lots of old pieces of wood that seem like they’re floating, but when I grab onto them, they start sinking down and don’t hold me up. That sort of thing. So if we get to that.

[00:04:10] John: Yeah, I mean, I’ve also thought about you. Given your journey with the podcast and all the perspectives and all the reading and all the thinking you’ve been doing, it must be quite interesting to get back in the lab.

It’s like going to college again when you’re much older, right? What is that line? Oh, if only I’d known the things I know now when I was younger. Right. So in other words, you’re getting another go-around. I’d almost like to hear your view on how different it feels doing neuroscience this time around.

[00:04:51] Paul: That’s interesting. I am the oldest person in the lab. I’m the old man sitting among graduate students and postdocs. I’m older than my advisor, Eric Yttri. I’ll have him on the podcast for a conversation pretty soon.

Let’s come back to that.

[00:05:09] John: And I love...

[00:05:12] Paul: Did you even know where I was working? I don’t know if you knew where I was working.

[00:05:16] John: Do you know what? I don’t think that I had clicked.

[00:05:18] Paul: You don’t keep up with everything I do, John.

[00:05:21] John: Come on.

Well, please say hi to Eric for me.

[00:05:28] Paul: I was asking Mark Nicholas, who’s in the lab (I’m not sure if you remember him; he’s a graduate student), if he had anything to ask you, because he had mentioned you. You get mentioned fairly frequently.

Another interesting thing about going back into the lab is that I’m sort of, like, known in that community, and that’s an odd thing. In fact, Eric told me that I was the most bashful celebrity he’d ever met, which I thought was an odd thing to say, especially because I’m not a celebrity.

[00:06:00] John: Yes, you are, actually. Yes, you are.

Not at all. You occupy a very important place in the ecosystem.

[00:06:10] Paul: Yeah. And now I’m trying to maintain that occupation as well as doing real science. And so maybe we’ll get to that later.

[00:06:17] John: Well, I would simply, if I may, just object to that.

I think I talked to you once way back about Hasok Chang’s sort of definition of philosophy of science, which he thought was complementary science. And I think that you are doing real science, having the conversations you’re having and playing the role that you’re playing. And I mean that sincerely. It is not right, in my view, to say, oh, gathering data in the lab is real science, but the discussions that I’ve had are not; they’re both complementary forms of science, is what I would say. And when the dust settles, I think it’d be very interesting to see where the greater percentage of the variance of your influence has been.

[00:07:04] Paul: Oh, I can guarantee you the greater percentage has been through the podcast. I mean, that’s not even a question. But that’s as much of a knock on my own lab science.

[00:07:14] John: Again, I know that’s the way that people think, but I actually think it’s non-pluralistic to take that view. But you know me about that.

[00:07:27] Paul: So what is new with you? What do you want to chat about before we go down the rabbit hole of my own problems?

[00:07:33] John: Well, I mean, as I was sort of saying to you off the recording, my interest at the moment, what I’m sort of doing, is sort of Sherringtonian physiology and psychophysics.

Looking at the phenotype of hemiparesis. It’s really amazing, right? Stroke is what gives you the most common motor disorder. You can make smooth reaching movements and prehension, and we all do it; we’re doing it now with our coffee cups. And yet, if you broke that system, I don’t think any of us would guess how it would fall apart into the bits of abnormal features that you see, right? You get weak and you lose dexterity, and you have spasticity and you have synergies. And it’s just very strange to try and guess how the system has been assembled by looking at the way it falls apart, like Humpty Dumpty, after damage. And so I’m finding it really interesting to try and dissect the behavioral phenotype better and better and then try and map that onto physiology and anatomy. So you would laugh, I think, Paul: a very mechanistic, you know, hardcore project, and yet that’s exactly what it is. And I’m very lucky to have younger people who have grants now working on this with me and with others: a wonderful young scientist at Harvard, David Lin, who has a grant working on this; someone called Ahmed Iraq, who’s doing 3D reaching studies, markerless tracking, on the phenotype with us; a former postdoc of mine who’s now chair of cognitive science in Israel doing it. So in other words, there’s a real interest, I think, in trying to do old-school, mid-20th-century behavior and physiology on an old problem with new tools. And I’m finding that very exciting and very interesting.

[00:09:40] Paul: When you say Sherringtonian, let’s just make sure that people know what you’re talking about. You’re talking about Charles Sherrington. But when you say Sherringtonian, you mean kind of a circuit-based approach: area X connects to area Y, what happens? And then in your case, sometimes you’re lesioning, creating a stroke, and/or studying strokes in humans, right?

[00:10:02] John: Well, actually, yes. In other words, there we’re obviously not lesioning them on purpose, but with a really fabulous primate physiologist, Stuart Baker in the UK, we have a grant where we’re trying to create a new model of hemiparesis, going beyond the classic Kuypers and Sarah Tower papers from the mid-20th century, where mainly they were making pyramidal lesions and they were not seeing some of the features that are very prominent in humans. So there’s sort of a mystery as to why even our closest relatives are not showing the full panoply of abnormalities. And in fact, it may be interesting to listeners that the term extrapyramidal, which is what people use now to talk about movement disorders in neurology, came from the fact that you couldn’t see the positive symptoms in the pyramidal-lesion monkeys. They didn’t have spasticity, they didn’t have synergies; they were just weak and lost dexterity. So it led to a debate: well, where are those things? They must be extrapyramidal. I see, right? Yes, exactly. It came from being puzzled by the absence of the movement disorder in the stroke model in the monkey.

And so we’re trying to address that mystery.

And it also leads to a real rethink about the corticospinal tract and how utterly dependent humans became on it. And the way I like to talk about it is that the corticospinal tract, which is the tract whose lesion is the cause of the hemiparesis disaster in stroke, is not just a controller of muscles; it’s a controller of all the other controllers. So the disassembly that you see is a manifestation of its loss of control over many, many centers along the neuraxis, right.

[00:12:08] Paul: Are the varieties of those phenomena, the behavioral phenomena, perplexing, or are they systematic? Because given a stroke, it’s interesting to see the varieties, right, of different behavioral outcomes.

[00:12:25] John: Well, I mean, it’s a bit of both, right? On the one hand, you see these features over and over again, and yet they are weighted differently and they appear at different times after the lesion. So you’ve got a mixture of, it looks like the underlying equation, if you could only find it, is the same, but the weights on the parameters are different.

And then, of course, in terms of all the potential ways that you could do it, the null space of potential connections, in other words: is it something about the brain stem? Is it something about the spinal cord? Is it something about the connection between the brain, the brain stem, and the spinal cord? In other words, there are many either-or possibilities that you can entertain in terms of connectivity and strength of connections, and you’re just going to have to go find out. So, in other words, you’re absolutely right. It is very much about what’s connected to what, and what’s the strength of the connection, and what’s upregulated. And it’s painstaking. And of course, in humans we don’t yet have good ways to image the brain stem and the spinal cord. So that’s why people have been looking under the lamppost for so long with non-invasive methods, cortically.

But it’s a huge undertaking, Paul. Right. It’s basically a decades-long project to better characterize the behavioral abnormality and begin to get a better insight into what its generative mechanisms are. It’s really old-fashioned, cool neuroscience.

[00:13:48] Paul: I think you started this off by saying that I might be surprised to hear that you’re involved in this kind of a mechanistic approach, because a lot of what we talked about in the past has been, I mean, you have an interest in higher cognition, right? And thinking, at least in terms of higher cognition. And you can correct me if I’m misspeaking for you here, but that kind of thinking, circuit A goes to node B goes to node C, that Sherringtonian approach, is not going to get us to understanding higher cognition, whereas what you’re doing right now is a much more mechanistic, circuit-based approach.

But you’ve been interested in movement and behavior itself also for a long time, so it’s not like you’re just interested in higher cognition. So it doesn’t surprise me that much.

[00:14:38] John: Right. And also, I think when we first spoke, I always said that trying to understand things versus trying to fix things are quite different projects. Now, this is one where there’s a better subterranean connection between understanding the abnormality and trying to fix it. In other words, trying to understand a healthy system, versus trying to understand how an unhealthy system works, versus fixing how that unhealthy system works, is along a spectrum. And so, yes, the projects at your own university right now at Pittsburgh are extremely exciting, given this notion of the corticospinal tract being the one ring that rules them all. If you could bring the corticospinal tract back online with all its potential targets, you could actually hit all the features of the hemiparesis phenotype at once. We’re trying to implant electrodes in the cervical spinal cord, this is work led by Marco Capogrosso and Elvira Pirondini and others, and amplify the residual signal of the descending commands so that they can do their job better. So, in other words, very exciting. We had a paper early in 2023 in two patients showing that you could get quite remarkable return of function as soon as you turned the stimulators on, showing that there was this residual capacity in the descending system to do better than it was doing with the patient on their own.

So there’s a very exciting, I would say, confluence.

[00:16:12] Paul: Yeah. Sorry. Does it require that there be some remaining activity, that it’s... I mean, it’s not completely ablated?

[00:16:21] John: I think so, in other words. But it’s interesting, right? It’s the same as in spinal cord injury. When people say you have a total spinal cord injury, it’s never really total. In most cases, there’s some residual descending connection. Now, what you can do with that is the question. But, yes, in other words, you’re assuming that you’ve got some minimal residual capacity that you have to try and kick-start and bring back online, and then you have to mix that with behavior. In other words, this is what’s so interesting to me about the work that I’ve been doing in stroke with video gaming: you want to create the ideal behavioral platform against which you try and bring circuits online. So it’s sort of like a double hit.

Bring it online physiologically, and then sculpt it behaviorally and do that simultaneously.

It’s very interesting. I mean, that’s sort of where I’m at: a monkey model of the physiology, better dissection, especially in 3D, of the abnormality using 3D robotics and motion capture, and then trying to, in fact, do non-invasive imaging, TMS, and just try and get a sense of this whole thing.

So, yeah, that’s something that I’m heavily involved in and find very enjoyable, because it covers all the things that I like. Like you said, it’s patients, it’s recovery, it’s physiology, it’s behavior.

[00:17:51] Paul: It’s not higher cognition.

[00:17:53] John: No, absolutely not. I mean, I would say, like you asked me once way back, it’s because I’ve studied the motor system so much and seen patients with higher cognitive deficits that I’ve been very interested in the cognitive-motor interface, which is the title I often give my talks, because I’m very interested in planning problems, apraxia.

I’m very much interested. Some people, like Scott Grafton, have even called them the higher motor disorders, right?

And in my book back in 2017, I actually quote Sherrington’s 1917 paper, a big paragraph where he and his colleague, in Leyton and Sherrington, comment on the separation between the consternation and thinking that the chimpanzee was doing versus its surprise that its arm wasn’t working. It’s a very beautiful passage where they literally point out the separation between, hey, WTF, why can’t this work? and keeping on sending a command down to get the arm to work.

When you see that passage that goes back over a century: it’s something that you see all the time in patients, this complete surprise at their lower-level deficits when they’re cognitively intact. The most dramatic example of this, of course, is locked-in syndrome, where the person has a basis pontis lesion. They can’t move anything at all.

Right? And yet they can dictate whole novels in their heads, right? In other words, I always am a bit baffled by people who deny either that there is such a thing as higher cognition, which is, we can have a conversation about that, and then the people who want to sort of concoct a sensory-motor origin for it, right? Yeah, sort of the 4E story in some form or another.

I would love to be a motor chauvinist. I would love to say that the work that we’re doing on the sensory motor system is the core set of principles that everything else will launch out of.

But I’m a bit of a no free lunch theorem person. You’re just not going to get everything out of one place.

[00:20:16] Paul: But I think that we’re going to...

[00:20:18] John: ...have to tell a different story.

[00:20:19] Paul: Yeah, no, I agree with that. But I’m not sure that you would find that many people who disagreed with you, let’s say, for example, within the 4E community, the embodied, enactive, the 4E community. Who...

[00:20:34] John: Are you joking?

[00:20:35] Paul: Well, when it really comes down to it, because there are frameworks to understand, right? So thinking can be thought of as internal motion, right. If you go and you measure brain activity, it’s not like you’re actually measuring neural activity related to movements that just happen on the inside; it’s more of a framework for thinking about how thinking evolved, because you have to analogize and model everything.

[00:21:08] John: Right. Well, again, this is a lovely segue into something else that I’m heavily involved with, which, if you don’t mind, let’s move into. Right, so, in other words, Tomás Ryan. I don’t know if you’ve ever had him on your show. I think you just...

[00:21:27] Paul: I just met him at SFN, too.

[00:21:30] John: Yeah. So, Thomas.

[00:21:34] Paul: Thomas.

[00:21:35] John: Francis Fallon, a philosopher.

Kevin Mitchell, Melanie Mitchell, Celeste Kidd.

We’re all part of this, and I’m sure I’ve not listed everyone. I hope they don’t listen to this and think I’ve forgotten them; the names will pop into my head in a minute. We’re part of a representational working group at Trinity College Dublin, where we’re working on the issue of representation in neuroscience.

In fact, we have an article that explains our project in the new magazine The Transmitter, released by the Simons Foundation, where we wrote about this project.

And the reason I’m bringing this up, other than it being something I’m a major part of, is that one of the things the 4E people are denying when they get interesting, in other words where it’s a fight worth having, is that they’re anti-representationalist.

In other words, the real issue when it comes to cognition, where the fight has happened for decades and decades, is representation, language of thought, symbols. That is something that the 4E people, and I’m just going to say this broadly, deny completely. Okay, as long as the caveat is...

[00:23:04] Paul: ...that you’re saying it broadly.

[00:23:06] John: Sure. But I mean, you can always do the motte-and-bailey thing, which is to go, oh, well, this is the overall position, which is anti-representationalist, is sensory-motor, and then say, oh, well, there are some who will make some little qualification to squirrel out of being too extreme. I mean, I think there are papers I’ve seen just in the last week that want to take the sort of Gibsonian, affordance, non-representationalist, embedded... I mean, it rears its head in all sorts of ways, and it’s basically anti-cognitivist.

And to go back to the beginning of what I was saying, you’re just not going to get to those phenomena from the sensory-motor system. And just another point here: there are two kinds of strains, right? There are the neuroscientists who want to sort of tell a biology of cognition; they want to get away from a psychology of cognition. Even when you had Max Bennett on your show, there was, oh, well, those are psychological terms, right? So in other words, there’s this idea that we’ve got to get away from a psychology of cognition, and we need a biology of cognition, you know, things like what Mike Levin is doing, Pamela Lyon, that strain, that surely we can get some sort of life principle, some Fristonian cellular principle.

So let’s go to biology and life, let’s not go to psychology.

And then you’ve got people who go even further, like Karl, in that they think there might be a physics of cognition. In other words, when you listen to Karl, he’s really trying to be a physicist of cognition. And then, in the light of that sort of basic biology, basic physics of cognition, the enactivists, the embodied people, are quite non-revolutionary; they’re at least trying to be sensory-motor, physiological. And then you’ve got people like me, old-fashioned people like me, saying that we do need a psychology and a cognitive science that isn’t sensory-motor, isn’t cellular biology, and isn’t physics. Do you see what I’m saying? So in other words, you can see this massive effort to go down to basic principles that allow you to get away from the human-centric psychological view of cognition.

And that’s where we’re at. And of course, the deep irony of it, the final point, the deep, hilarious irony of it, is that the current craze in AI is large language models, which are entirely based on a human product, right? So in other words, the irony is just delicious: after all this attempt to be ecumenical and talk about intelligence in animals and go down to other basic systems and be more sensory-motor, suddenly, here we are dealing with human language, right. It’s just so deliciously ironic that this...

[00:26:22] Paul: ...is where we’ve reached? That the cutting edge of AI right now is based on products of human cognition. Is that what you’re saying?

[00:26:32] John: I’m saying that it just turns out that the most impressive feats of claims to AGI are a system that parasitizes, that does archaeology on, vast quantities of human thought in the form of language.

[00:26:50] Paul: In the form of language.

[00:26:51] John: So, in other words, I’m just saying that the embedded story about intelligence in an amoeba or in a worm, it’s irrelevant, right. In other words, that’s interesting to me...

[00:27:03] Paul: ...that you would say that, because I know at the same time you are a, quote unquote, pluralist, right? So any sort of scientific question should be approached from different angles and different levels, and they’re all valid. Some of them may be more valid than others. So, in that sense, you might need a basal-cognition life story. You might also need the psychology story.

But the way that you’re saying it right now, it’s like one of those is the winner, which is confusing.

[00:27:39] John: No, I’m saying: don’t claim pluralism in the guise of reductionism. What they’re trying to do is to say, if we can find some core set of things that we’re allowed to call cognitive, and we can generate principles from those, then the rest is just a kind of extrapolation from those principles. In other words, the actual heavy lifting, conceptually, has been done at the basal level, and then the rest is just details. It’s like footnotes to Plato; it’s footnotes to basal intelligence. So we’re going to go from footnotes to Plato to footnotes to Friston. Do you see? It’s that kind of idea.

And I find that completely nonsensical. And it’s actually very interesting, even with people on your show, as I told you. I listened to your show with Max Bennett. I really enjoyed it. I think he’s made an amazing act of synthesis. But it’s fascinating, when you listen to that podcast, how often human psychological notions infect the discussion of the animal work.

There’s this notion, I don’t remember where it came from, of surplus meaning leaking into things.

And so what happens, and my brother has talked about this, is the difference between models and metaphors, where you don’t realize that you’ve slipped from a model of something to making it a metaphor for what you really care about.

And so what you find is that psychological, cognitive terms keep slipping in as metaphors to the discussion.

I heard it over and over again. Oh, the animal imagines whether it’s going to go left or right.

[00:29:30] Paul: Right.

[00:29:31] John: You hear that all the time.

There’s nothing in the data that supports that idea.

[00:29:37] Paul: Right. Let’s say in a particular case, what would be a better way to use that? So vicarious trial and error was one way.

[00:29:46] John: Vicarious trial and error, that’s a real problem. But even there, that’s not proven.

[00:29:51] Paul: No, I know, but that’s what I’m saying. You have to use words.

[00:29:55] John: Well, you could just say, it’s really, you know... I’m a very close friend of David Foster, who’s done some of the most interesting work on this, and I was with him in Portugal, and we had long debates about this. And in fact, one of the pieces that I’m writing with Dan McNamee, a computational neuroscientist, is to say that they have neural evidence for latent structure in a maze, and can therefore generate a policy; that’s interesting.

However, it’s a step too far to then infer that they’re entertaining options before they go. In other words, that is a beautiful example of the metaphor of psychology: oh, I’m imagining options, I’m looking at one and then the other, and I’m choosing. That is not the obvious and only conclusion from the data. So what I’m saying is that unless people are extremely careful, right, but you...

[00:30:57] Paul: ...can’t be that careful.

[00:31:00] John: What was that?

[00:31:00] Paul: I think.

[00:31:03] John: You disagree.

What I call this is the cusp: it’s extremely challenging to do experiments in animals where there isn’t an alternative, implicit, algorithmic solution to what’s going on, without having to invoke overt imagination, okay?

And that’s something that we’re writing about. It gets to the whole notion of internal models, the whole notion of simulation, the whole notion of imagination. And what I’ve argued in other pieces that I’ve written, as when I wrote the review of Nicholas Shea’s book on representation and cognition, is that there is this very interesting invoking of overt representation, imagination, simulation, when you don’t need to.

[00:31:58] Paul: Right.

[00:32:00] John: But in humans, to your point, you can do it in a second. You can come up with an experiment where it’s simply impossible to explain the performance of the person without invoking overt representation. I’ve told you before on this show, and definitely on The Learning Salon with Ida: close your eyes, imagine you’re standing in front of your house, and walk through your home. Now, there’s just no other way to explain what you’re doing other than the fact that you’re overtly conjuring up your house and thinking about the paintings on the walls and the turns. Now, what I’m saying is that there isn’t a shred of evidence yet that any other animal species can do that. But we do it all the time: you walk by a furniture store, you see a couch, you go, ugh, I love that couch, but it’s too big for my living room, right? You just know that it’s too big for your living room. You have a sense of the dimensions of your living room. You have a sense of the dimensions of the couch. You do some kind of thinking in your head that it’s just not going to fit. We do it over and over and over again.

Right? Now, the interesting thing is that a lot of the time, even we humans don’t, right? Whenever I’m in a hotel and I take the elevator to my floor and I take the twists and turns to my room, I’m certainly not imagining the hotel corridor or where my door is relative to the others. I’m just doing something much more mouse-like. I remember a few landmarks. I remember that fire extinguisher. I remember that. And I basically don’t have to rise to the occasion of overtly representing anything, right? And I find my way each time. So, yes, you’re right. Human beings are also mouse-like a lot...

[00:33:37] Paul: ...of the time. I would say most of the time.

[00:33:39] John: But we have this superpower. We can do imagination, we can do time travel.

I’m going to go back to Pittsburgh and do science. I’m not going to do it this year, because I still have my podcast, but I think I’m going to do it next year when, for these reasons, I can imagine it would be more feasible.

That is a very different kettle of fish from finding your hotel room through landmarks when you get out of an elevator. And we just don’t know how we do it. We don’t know how that works.

And all I’m saying is, let’s just accept that doing an amoeba in a dish, a paramecium, a worm, or looking at a large language model... and Ida Momennejad has done some beautiful work testing them on their navigation abilities, and they suck, right?

And just accept, as David Deutsch has also said, that there’s something interesting about human beings and their ability to explain the universe and imagine it. And I’m just saying that at the moment, there’s no contender to explain that ability. And trying to explain it away through physics, basal cognition, or AI is just premature hubris, in my view.

[00:35:01] Paul: So do you think, now I’m just baiting you, that there’s something exceptional about humans? Do you think less in terms of a continuum among the different species in the known world, or is there some sort of human exceptionalism?

[00:35:21] John: Well, I mean, John Maynard Smith and, I’m forgetting the co-author, wrote a book about transitions in evolution.

I don’t think anyone in linguistics denies that human language is singular and does not exist on a continuum with communication. And in fact, many people even deny.

Even if you talk to my friend Paul Cisek, he’ll go, well, language, abstraction, yes, but let’s table that for now, right? So, in other words, language is the one place where people will allow exceptionalism, but it’s the only place, and then everything else is on a continuum. It’s hilarious. And yet, look at the actual prerequisites: beautiful work by Ev Fedorenko and her team showing that language and thought are quite distinct, and also showing that without the thought bit, you can’t use the language bit.

Right.

And you look at, you know, Thom Scott-Phillips and his work showing that there’s a unique form of inference, ostensive inference, that you need in order to use language. In other words, language is proof of a unique cognitive ability, not what confers a unique cognitive ability. But the thing is, again, a lot of people aren’t even familiar with this work, right? That’s one of the reasons why I want to set up this program with Melanie Mitchell, if we can, at SFI: to just have more conversations between people who really do look at primate cognition, who really do look at corvid cognition, and really see what they can and cannot do, rather than just vaguely invoking them as proof of a continuum. Right? Let’s just get into it.

I think most people just don’t get into it.

[00:37:10] Paul: Yeah. Total aside, but how is it? It just struck me that you’d have a boss in your younger brother sometimes.

[00:37:18] John: How is.

I mean, you know, David and I have converged a lot in terms of our interests, and I think he lives in this oscillation on this topic of intelligence, where he’s absolutely not human-centric. And I think he does differ from me, coming from sort of cognitive science and psychology. He does believe very much that there should be general principles, but he also agrees very much with emergence.

He very much believes that you can have discontinuities.

And emergence has two meanings, right? One is sort of these new properties from an aggregate, but it’s also about what sort of explanatory framework is needed, and whether you need to go under the hood, as he says.

So I think there’s a very interesting tension, which he enjoys, between looking for general principles across the continuum, but also recognizing that in emergence there are discontinuities, right? And the point that he always makes, which is very interesting, is that we do not know ahead of time when we should and when we should not look under the hood, when there is a continuity and when there isn’t. There is no general principle for recognizing a system as being one or the other ahead of time.

Right? So the example he always gives is: to understand the boiling point of water, you need to know about the molecular structure of water; for fluid dynamics and Navier-Stokes, you don’t.

Right.

And so it’s an open question. And he will admit to me, I think, and I don’t think I’m putting words in his mouth, that it may well be true that there are discontinuities somewhere between chimpanzees five or six million years ago, through all the hominid species in between, which we don’t have access to, and us; that a discontinuity occurred.

And as I mentioned all the time, don’t confuse substrate continuity with functional discontinuity.

The example that Thom Scott-Phillips always gives is feathers, right? Feathers for flight and feathers for thermoregulation look the same, but there’s no continuity between flight and thermoregulation. Those are completely discontinuous functions for a substrate, feather structure, that is completely continuous.

[00:39:46] Paul: Yeah, right.

[00:39:47] John: So that’s another big problem, I think, that people in biology and neuroscience make is that they think that substrate continuity implies functional continuity. It does not.

[00:39:55] Paul: Do you think the recent handful of years of discovering new intelligences in different species, like crow tool use, et cetera, the more that we learn about a given species, a slime mold can solve a maze, and what does that mean? Just zooming out, it sure looks continuous, even function-wise. It doesn’t look discontinuous to me.

[00:40:20] John: I don’t think so. In other words, I think that the great success of reinforcement learning in the biological world has been model free.

Right. There are many, many model-free ways to engage in intelligent behavior.

In computer science, there’s an entire model-based approach to RL.

The problem has been to try to take the model-based approach in computer science and look for its analogues in the animal world.

And that’s why I think it’s been a failure. It’s a false friend because it doesn’t.

[00:41:05] Paul: Exist in the animal world.

[00:41:06] John: Doesn’t exist. I don’t think it exists, no.

So they keep trying to find it. And then what you find are sort of clever model-free kludges that get the job done: successor representation, things like that, and others.
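For anyone who wants that distinction made concrete, here is a minimal sketch of my own, not anything from the conversation: a toy five-state chain in which model-based planning, model-free temporal-difference learning, and the successor-representation "kludge" all arrive at the same state values by very different routes. Every name and number below is invented for illustration.

```python
# Toy sketch: three routes to the same state values in a 5-state chain
# (all details invented for illustration; not from the episode).
import numpy as np

n, gamma = 5, 0.9
# Fixed "always step right" policy; the last state is absorbing with no
# further reward, so its row in P is all zeros.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, s + 1] = 1.0
r = np.zeros(n)
r[n - 1] = 1.0                      # reward only in the final state

# 1. Model-based: plan by consulting the known transition model P
#    (value iteration).
V_mb = np.zeros(n)
for _ in range(50):
    V_mb = r + gamma * P @ V_mb

# 2. Model-free: TD(0) reaches the same values from individual
#    transitions, never consulting P as a whole.
V_mf, alpha = np.zeros(n), 0.1
for _ in range(3000):
    for s in range(n):
        nxt = gamma * V_mf[s + 1] if s < n - 1 else 0.0
        V_mf[s] += alpha * (r[s] + nxt - V_mf[s])

# 3. Successor representation: cache discounted expected state
#    occupancies M = (I - gamma*P)^-1 once; values are then a lookup.
M = np.linalg.inv(np.eye(n) - gamma * P)
V_sr = M @ r

print(np.round(V_mb, 3))   # ≈ [0.656, 0.729, 0.81, 0.9, 1.0]
```

The point of the contrast: only the first learner ever touches the transition model; the successor representation merely caches statistics of experience under a fixed policy, which is why it can mimic model-based competence without being a model, in the "kludge" sense above.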

The example I’m giving, and writing about right now in this long-term book, is: take an alien watching someone playing Pac-Man.

You’re looking at a Pac-Man game, and there’s the human playing as Ms. Pac-Man, and the computer is the ghosts.

The ghosts are chasing after Ms. Pac-Man, and at times Ms. Pac-Man chases after the ghosts. And you, watching as an alien, would be quite correct to say, look, there’s agency in the ghosts and there’s agency in Ms. Pac-Man. And they’re both intelligent, they’re goal-directed: get Ms. Pac-Man, get the ghosts and the little dots, or whatever they’re called.

But the fact is, and the point I’m making is, there isn’t a remote algorithmic overlap between the way the human is playing it and the way the computer is. So it’s not a continuum; it’s a discontinuity. It’s showing you that the task can be solved in two completely different ways.

Right. But to come to the premature conclusion that because to me, it looks like they’re both running and chasing and eating, that they’re therefore doing it the same way is idiocy.

[00:42:40] Paul: Right, so then, just sticking with species: is it humans and everything else, or are there discontinuities between species as well?

[00:42:54] John: I’m sure there are discontinuities between species, too.

[00:42:59] Paul: Large ones between plants and animals, perhaps.

[00:43:02] John: Yeah, absolutely. And Michael Tomasello has written a book, it came out, I think, last year, talking about these transitions. Very much the sort of story that.

[00:43:12] Paul: Max gave the hierarchical control book.

[00:43:17] John: Yes, very much this. The Evolution of Agency, I think it’s called. I think that’s right. But basically everyone’s telling this step story.

It’s kind of ironic, right? Everyone picks their own discontinuities and then nevertheless tries to tell a continuous story. Well, yes, there are these new functional capacities, but it’s on a continuum. So in other words, how are you doing that? How are you managing to have your cake and eat it, right?

So I think, yes, but humans are a kind of metastatic cognition. They’re kind of a cancer cognition. They did something that is going to destroy the planet. So you could argue that human cognition is a step too far. It is some horrible discontinuity, just like cancer is a discontinuity from normal cell division.

[00:44:14] Paul: That’s such a pessimistic viewpoint.

[00:44:18] John: Well, I mean, look what we’re doing.

We are the only species that will probably engineer our own extinction.

[00:44:25] Paul: Well, that’s the pessimistic part, because I understand that. Look at the global warming data. Right. And it’s hard to argue against that, except that on longer timescales, looking back, let’s say we advance far enough, quickly enough that we can solve the problem, right? So we’re eating ourselves out of our own houses, but we need to eat that food to be able to think how to build a new house.

That’s a bad analogy, but there is that potential race going on, right? Are we smart enough to just barely make it?

[00:45:01] John: Yeah, but what you’re saying, basically, I.

[00:45:05] Paul: Think it’s hubris to think that we’re smart enough to destroy ourselves. I mean, I understand that we are.

[00:45:11] John: Because I could argue back that it’s hubris to believe that we’re smart enough to save ourselves.

But all I’m saying is, if you were to take a look at the Earth from space. I mean, David Deutsch has a great example of this, right? He gives an amazing example, which made me laugh. He said, on every other planet, if an asteroid is coming towards it, the laws of physics will mean that it is attracted and collides, like it did with the great extinction, right?

But if you were watching Earth and humans found a way to repel an asteroid, you would be seeing one little planet with a superpower that defied the laws of physics, where an asteroid actually didn’t collide with the earth. And the point that David Deutsch makes, which is so clever, he says that’s human understanding acting as a force opposite to physics.

[00:46:15] Paul: Understanding in terms of control. In that respect.

[00:46:18] John: That’s right.

There is an ability on that planet that understands the universe, and there is a force counter to physical forces, called understanding. And the idea is so brilliant, because he’s absolutely right that you actually see a causal consequence of understanding, which only humans have: the asteroid doesn’t hit the planet.

It’s so clever.

It’s basically not an analogy. It’s actually true.

[00:46:48] Paul: It’s an understanding. Yeah.

[00:46:50] John: Right. So, in other words, you’re seeing another force in the universe repelling an asteroid, and it’s understanding. And all I’m saying is, that is amazing. And he’s right when he also agrees that there’s this ability to understand and explain features of the universe and therefore have a causal effect on them. And that’s fascinating, but studying C. elegans isn’t going to tell you how that happens.

[00:47:13] Paul: Right, but the irony is, we deflect the asteroid, and then the following week, we all die because we’ve killed ourselves.

[00:47:21] John: No, exactly.

In other words, all I’m saying is that there’s something unstable, like cancer, about this form of cognition that evolved with the same substrate, and it’s just very difficult to know how to get traction on it, to explain it, in other words. And you could say, and I’m sure you will, oh, so here we are now with cognition that can repel asteroids. Surely we can use this cognition to understand cognition itself. Perhaps we will.

[00:47:52] Paul: Perhaps we will.

[00:47:53] John: But I’d rather we.

[00:47:54] Paul: I don’t know.

[00:47:54] John: Or perhaps we won’t. But I’d rather we accepted the size of the thing that we need to explain rather than, as we’ve been discussing, these bizarre attempts to diminish it, explain it away, deny it.

That is what I find very, very strange.

[00:48:12] Paul: It’s perhaps because we try to, and maybe this is the reductionist approach, and we can move on in just a second, but maybe it’s because we understand everything else as reflections of what we think about ourselves, right? Because everything else is below us, quote unquote, and so a simpler version, right? And so if we’re this complex version, then we have a higher probability of explaining everything else in our own terms that we’ve invented through language and psychology, et cetera.

[00:48:51] John: At one point you were using that language. And it’s this notion of sort of surplus; it came up at some point in your podcast with Max.

Well, yeah, no, but it is a recency effect. But I think it’s actually important, because there is a link between the conversations in a way, right? Evolution of intelligence, AI. And, you know, this idea: oh, well, it’s the same template, I think at one point it was said, but then more complicated things are done with it. So, in other words, all the weight of the extra is in the terms "more complicated," "more sophisticated."

Right? So, in other words, you go, I don’t know what I’m jamming into that word. It’s basically this, but a little bit more complicated.

But this is where language just gives you a get-out-of-jail-free card. You’ve actually not said anything very interesting by saying it’s the same thing, basically, a bit more complicated. Now you’ve put all the explanatory weight on the word "complicated," and then you go, well, tell me exactly what that means, "more complicated."

Is a computer just a more complicated calculator?

Right.

Is a plane just a more complicated car?

Right.

You can get away with everything with squirrely words like that. Do you see?

[00:50:11] Paul: Okay.

[00:50:12] John: And yet it’s very, very hypnotically sort of comforting to be able to do that.

[00:50:18] Paul: Explain away via complication.

[00:50:20] John: Yeah. Words like that, it’s the same, but it’s just a bit more complex. A bit more complicated. Right. And maybe it’s just my problem, but I just don’t find that very satisfying.

[00:50:32] Paul: Sure. Yeah.

I’m not sure that we’re ever going to be able to explain ourselves via ourselves. Maybe we need aliens to actually help us do that. Right. Tell us.

[00:50:47] John: Again, in philosophy, there’s this notion of reference, and David Barack really taught me about that and brought my attention to it, which is: let’s at least agree that there’s a problem in need of explanation, and not do that thing, which is very sort of, you know, "you don’t like psychoanalysis because you have a problem," right? Just say, look, just because we’re recognizing a phenomenon doesn’t mean that we’ve already decided on the theory for that phenomenon. And I sometimes feel that when it comes to cognition, you’re not even allowed to say that it’s a thing in need of explanation, because people will accuse you, in invoking it, of already being committed to a theoretical framework, and therefore you’re not worth arguing with. Now, in all seriousness, I find that strange. I think there is something in need of explanation.

We can be ecumenical and pluralistic about trying to understand it, but don’t just deny that invoking it is already a theoretical misstep.

I don’t understand that move, and it could well be that I will turn out to be wrong. But I don’t think that’s a very healthy starting point, which is to go, oh, what is cognition? What is thinking? What is intelligence?

You’re just concocting terms that are getting you into trouble. It’s like the ether.

I don’t think cognition is the ether. I don’t know what you think, but I don’t think that’s right.

[00:52:22] Paul: Wait, what do you mean that? Can you expound on that a little bit before I agree or disagree?

[00:52:28] John: Well, people in physics invoked an ether, right, to explain the propagation of light, okay?

And it was a completely nonexistent entity, right? It’s like phlogiston.

[00:52:42] Paul: Define it, because there is an ether, if you define it the right way.

[00:52:45] John: Well, I don’t think so. I think things like phlogiston and the ether don’t exist. And I think that, just like the life force of vitalism, there are many invoked entities that just don’t need to be invoked. They basically can be discarded. And I don’t think that the notion of thinking and cognition can be like that, that you can just tell a more pragmatic, embedded sensorimotor story, go back to more basic principles, and not have to invoke anything extra. It’s a little similar, if we want to, we can move on, to this recent work we’ve done about there not being anything like cortical reorganization.

[00:53:34] Paul: Yeah, let’s move on to that. Because, to wrap up, I feel an affinity and a soft spot for that 4E approach, I think because it is grounded in things that I feel I understand better than things like, quote unquote, higher cognition, ether or not.

And I appreciate it from a pluralistic perspective, right? And I also appreciate the physics-of-life approach, and the biology of intelligence, I think, is what you refer to it as.

[00:54:09] John: Please don’t get me wrong, so do I. I think that work is fantastic.

I think biology is amazing. All I’m saying is there are three ways to look at it, just to finish. There’s the biology of the system, which deserves to be studied on its own grounds, because it’s amazing, right? The incredible work that people like Mike Levin are doing. I mean, just incredibly interesting experiments, right? And all the people doing amazing work on animal intelligence. I mean, it’s just glorious science, right? And it shows you all the different ways that you can have the algorithms that biology has come up with to do intelligent behavior. Right? Of course, the question is, when do those become models for something else? In other words, when is one animal a model for another?

And I’m just saying that people like Michael Katz and others have argued very beautifully that we don’t think strongly enough about what we’re talking about from an evolutionary standpoint when we claim that one animal is a model for another.

Okay? And I don’t think there’s anywhere near enough actual overt discussion about why you’re allowed to claim that this animal is a model for another.

And I think that it’s a bit strange to see a mouse as an intelligent creature in its own right versus just seeing it as a little stepping stone towards primates, right? And there’s slippage between those two ways of seeing the mouse.

And then finally, third, there’s the metaphor problem. In other words, you go from the animal in its own right being intelligent, fascinating, to claiming that it’s a model, whatever that means. And then the worst is when it’s not even a model anymore; it becomes a metaphor, where you start saying, oh, the mouse is imagining this, imagining that.

Right. And so I am not in any way critiquing all the different ways that science is being done on intelligence. What I’m worried about is when conceptual slippage occurs and people aren’t even aware that it’s happening.

[00:56:10] Paul: Valid.

[00:56:10] John: Right.

So do you see what I mean? Absolutely.

The last thing I want to do is come across as playing referee over research: this should be allowed and this shouldn’t. I mean, it is an absolute disaster to do that, so I don’t want to come across that way. It’s the conceptual slippage and the jumps that are made that are never overtly admitted to.

[00:56:34] Paul: Okay, fair enough. So different parts of our brain don’t reorganize themselves to take over different functions. Instead, those functions were always latent and available, and now get to be expressed.

[00:56:52] John: Well, it’s very, you know. I have an amazing colleague, a professor at Cambridge, Tamar Makin, and she’s done, I think, some of the most extraordinary work looking at these phenomena of dramatic plasticity in cases of disease or damage: amputees, for example, and tool use.

And I was doing work, as you know, on recovery of function in stroke. And there’s been a lot of seminal work saying, oh, well, when you lose this part of the brain, an adjacent region takes over. And Ramachandran seemed to have a paper in Nature every couple of months showing some dramatic example of the brain’s capacity to reorganize. Whole books get written about the amazing plasticity of the human brain. And interestingly, the classic experiments, whether it’s Mriganka Sur’s incredible ferret experiment, or Hubel and Wiesel, and so on, people sort of remember these experiments in a way that actually isn’t the way that they were reported, or even the conclusions of the authors themselves.

Right. So, in other words, what Tamar and I did, coming from different areas, is that we joined forces and spent almost three years just carefully going through these seminal papers on reorganization: that the occipital cortex in the blind becomes used for language, that a hand area becomes a face area, and all that kind of stuff.

[00:58:33] Paul: That’s the classic story.

[00:58:35] John: It’s just not true.

Right.

And the thing is, the term reorganization is used, okay?

To invoke something special. In other words, we don’t usually say, when I learned how to play chess or I learned French or I learned table tennis.

We don’t usually say, in that case of health, oh, John’s brain reorganized for table tennis or for French. So reorganization is a term that’s saved for the more dramatic instances where a limb is lost or a stroke happens or you’re congenitally blind or you’ve had a hemispherectomy. It’s almost as though there needs to be a special form of plasticity to live up to the dramatic event itself and the behavioral recovery itself.

Reorganization.

[00:59:26] Paul: You mean physical?

[00:59:27] John: Yeah.

[00:59:27] Paul: When you say it, you mean physical reorganization.

[00:59:31] John: What we said is that you are claiming that there’s been a qualitative change in the computation performed by a region. So it was doing function a before, and it was just co opted and repurposed to do something else. And what we’re saying is it’s simply not the case.

[00:59:48] Paul: Doesn’t necessarily have to be the case or is not the case.

[00:59:52] John: Is not the case. In other words, if you look at the data, it was an Occam’s razor kind of approach.

Then you can use pre-existing notions of synaptic strengthening and input-agnostic computational ability. Let me give you an example of what I mean, right? Say you’re in a hotel and you don’t know the room, so you have to make sure that you don’t stub your toe on the bed, and you use vision to navigate around the bed. Okay, now it’s night and there’s been a power outage, and it’s dark, so you can’t see it.

And you go, wait a minute, I remember where the bed was in this room, and I’m going to navigate around it with memory instead of vision. But you’re still using the same system to walk around the bed. So you’ve basically done a navigation computation using two different forms of input: it could be memory input or it could be visual input.

So there are many, many examples where the computation you want to do can be agnostic to its sensory input. So what I’m saying is, look through two basic lenses at the work that everyone thought needed the invoking of reorganization.

One is unmasking of a latent ability that was always there and upregulating it, combined with regions that are input-agnostic in terms of the computations they do. Those two principles alone can explain all the dramatic results.
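The input-agnostic idea can be sketched in a few lines of toy code (my illustration of the hotel-bed example; the function names and coordinates are invented): one detour computation, fed by whichever input channel happens to be available.

```python
# Toy sketch of an input-agnostic computation (details invented):
# the detour computation itself never changes; only the source of
# its input does.

def plan_detour(bed_x: float, me_x: float) -> str:
    """The navigation computation: step around the obstacle."""
    return "veer_left" if bed_x > me_x else "veer_right"

def visual_estimate() -> float:
    return 2.0          # lights on: the bed is seen at x = 2.0

def remembered_estimate() -> float:
    return 2.0          # power outage: recall where the bed was

# Same computation, two different input channels, same behavior.
with_vision = plan_detour(visual_estimate(), me_x=1.0)
from_memory = plan_detour(remembered_estimate(), me_x=1.0)
print(with_vision, from_memory)   # veer_left veer_left
```

The analogy to the argument above: nothing about `plan_detour` has to "reorganize" when vision goes away; a different input is simply routed into the same computation.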

[01:01:31] Paul: Can, but that doesn’t mean that they necessarily do.

[01:01:36] John: But I’m just saying that when you look at the actual data, you don’t see, we go through it. We’re not just.

[01:01:41] Paul: Yeah, you don’t need to conclude.

[01:01:42] John: Just be very clear.

I’m just saying that we concluded that it was upregulation plus promiscuous computational ability.

That’s what those results show. In other words, it’s like the ether, right? You can invoke some special thing where a region of the brain has some generic, canonical, Lego-like property, where it can basically be reconfigured.

But even the most dramatic examples.

[01:02:16] Paul: Yeah, that’s what I was going to ask. Perinatal stroke, were there compelling cases that would suggest reorganization?

[01:02:21] John: Well, I mean, let’s take that. So the most compelling one was the artificial one, where Mriganka Sur’s team took baby ferrets and rerouted lateral geniculate nucleus output to A1.

[01:02:34] Paul: Yeah. So that typically goes to visual cortex. And if you reroute it. Go ahead.

[01:02:40] John: And basically they showed what looked like orientation-selective patches, right?

But what they actually tested behaviorally was just signal detection.

And if you look at the actual conclusion of that paper, they say that what you’re probably seeing is a more generic computational ability of primary modality cortex that can do a similar thing on basic inputs.

That’s the conclusion of those authors themselves.

Now, if you look at the perinatal stroke case, where kids can lose the entire left hemisphere, and in a subset of those people you can see language take over in the other hemisphere, work by Elissa Newport and many others, they also conclude that it’s not reorganization, because it happens exactly in the homologue. It’s not in any other area of cortex. It’s in the homologue.

Right. It’s in the mirror-image structure, with the same input and output relations, the same connectional fingerprint, as Richard Passingham calls it.

Right? So, in other words, there is a capacity already in the homologue. And we know from studies, even in normal adults who haven’t had hemispherectomies or perinatal strokes, that you can bring the nondominant hemisphere online for linguistic purposes. So it was always there. What would be really weird is if the prefrontal cortex or the occipital cortex suddenly became language cortex, right? But it doesn’t. It doesn’t.

[01:04:33] Paul: Maybe we haven’t seen dramatic enough examples of that.

[01:04:37] John: So, in other words, all I would say to you, Paul, is read the paper from beginning to end, example after example, and I would challenge you to really mount a credible argument against it. And then I would simply ask you why. In other words, is it because there’s a cherished desire to hold on to this notion?

Or is it because it really still is the best, most parsimonious explanation for what’s going on? And all I’m saying is, we don’t have an axe to grind. And the very fact that you’re having the response you’re having reinforces, I think, for me and for Tamar, that it was worth writing the paper.

[01:05:24] Paul: Yeah, I look forward to reading the manuscript, which I have not looked at, as you’ve mentioned.

[01:05:29] John: Yeah, it’s fine. No, but I’m just saying that it’s not, oh God, here’s John and co trying to undermine things.

It’s just that everything deserves a second look, if not a third look. Do you know what I’m saying?

[01:05:44] Paul: Sure.

[01:05:46] John: The other thing is these notions of internal models. Adrian Haith is working on a very interesting paper on internal models and simulation, which goes back a little bit to the conversation we were having.

There are many cherished ideas that continue to be used, not because they are likely to be what’s really going on, but because they are very amenable to formalism. I think model-based reinforcement learning is really an example of that, where it’s just so much fun mathematically that it holds far more sway than what may really be happening in animals.

And I don’t know why that is. It’s basically that we seem to need conceptual frameworks that are really fun to live in, which is arguably what Freud did, right? Freud constructed a conceptual framework that you could think about everyday life in. It was fun, right? The same with astrology, right? It’s fun to say, oh my God, he’s a Leo, she’s an Aries, right? It somehow helps us navigate complex things.

You know, I’m not claiming that reinforcement learning is like astrology or psychoanalysis. All I’m saying is that the desire for premature closure with conceptual frameworks is something that we should fight: we should always try to break those models rather than generate existence proofs for them.

Right. And I think there’s some weakness. I don’t know where it’s come from, where instead of trying to break things, people are trying to prove things.

[01:07:34] Paul: Yeah, well, I think it’s a much lower barrier to thinking if you accept a framework, right? Instead of always questioning the framework that you’re working from. I was going to ask you this broadly, then, directly related to the manuscript that you were just describing. So when I interviewed for graduate school, I had been a technician for a couple of years, and I did some mouse visual cortex slice plasticity work, where you stimulate with a certain protocol and try to induce plasticity, and you put ifenprodil or other drugs on there to reduce NMDA-mediated plasticity. Anyway, at the time, and I think it’s still the case, the assumption was that all learning is synaptic plasticity, even though work in recurrent neural networks shows that you can have learning without plasticity. Anyway, during my grad school interview, the faculty member said, so what do you think: is plasticity important for learning and memory? I was sort of taken aback, because I didn’t really know what to say at the time, and I said, well, I should hope so, because there’s been a lot of work done on it. So anyway, my question to you, John, is: having gone through the research that you went through, how important is synaptic plasticity in the brain of, let’s say, adults?

If that’s too ridiculous of a question for you to address?
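The "learning without plasticity" point Paul raises can be made concrete with a toy sketch (my illustration, not the specific recurrent-network work he alludes to): a system whose parameters are frozen, yet whose internal state comes to encode something learned from experience.

```python
# Toy sketch (invented for illustration): the update rule below plays
# the role of fixed "synaptic weights" -- it never changes. All the
# adaptation lives in the activity state the fixed dynamics carry forward.
import random

def fixed_dynamics(state: float, observation: float) -> float:
    # Frozen parameters: the 0.05 gain is never updated.
    return state + 0.05 * (observation - state)

random.seed(0)
state = 0.0
for _ in range(2000):
    sample = random.gauss(3.0, 1.0)   # environment with unknown mean 3.0
    state = fixed_dynamics(state, sample)

# The state has "learned" the environment's mean with no weight change.
print(round(state, 1))
```

The same separation shows up in trained recurrent networks: after training, the weights can be held fixed while the network's activity still adapts to new inputs, which is one sense in which learning need not be synaptic.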

[01:09:08] John: Yeah, no, it’s not ridiculous. I think obviously, from the point of view of things that I care about in my everyday lab life, motor skill learning and recovery from brain injury, I mean, it’s undeniable. I think that strengthening of connections is fundamental, right?

Where I think it’s much more interesting. And I defer to the incredible work that’s beginning to question synaptic basis for memory, for example. And you’ve had some amazing sort of podcasts about know, and you had my great friend David Popalon right, where I think this is an incredible example of where the regime of importance in the brain for things like cognition and memory versus the regime of use of substrate for sensory motor behavior may be different, but wouldn’t it be interesting if the same substrate was configured and used in a slightly different way for these different functions? And it’s maybe. I mean, and you know, people are claiming you don’t need neurons for all sorts of intelligent behavior. So it leads to the question, maybe you need neurons when you’re an elephant trunk or a giraffe neck or long legs with axons going all the way from the brain to the toe, and you need muscles to contract and all that. But when it comes to the regime for remembering things or thinking about things.

You’re inside the brain. You don’t have to go all the way down to your toe. Maybe it’s a different regime. And that’s what’s been very interesting about this discussion opening up: it’s a little bit banal to say, oh, synaptic plasticity is important.

It’s too short a sentence to sort of convey the complexity of the issue. And it’d be like, you know, you live in Pittsburgh.

Explain Pittsburgh to me.

Right.

[01:11:23] Paul: Actually, that wouldn’t take that long.

[01:11:28] John: So I think that it’s obviously a property of the nervous system that’s been exploited, of huge importance to the kind of work on motor learning and recovery. But whether it’s the sine qua non and the locus for thinking about memory and cognition, especially, as you say, there are many examples where you can have these effects, you know, like hierarchical reinforcement learning, the wonderful work by Matt Botvinick, where basically you can have these things happen without having to invoke synaptic weight changes.

I don’t know, I’m rambling on a bit here, but what I would say is important for what?

Right, and what specifically are you asking about? Because there are many other properties of the nervous system that we should be interested in. Also, Earl Miller, we had him on the Learning Salon, and the whole issue of oscillations, and I think he made the point really well, which is: why are we not sending out all our hounds? We don’t know what regime is used by the nervous system. Evolution, as we know, is incredibly good at squeezing out function from all sorts of features. And maybe the nervous system has features at multiple scales, from oscillations to SNARE synapses. And we don’t know, under particular behaviors, which scale and regime is the one having the most weight put on it. Pardon the pun. Do you see what I’m saying?

And this is what I meant, going all the way back to the beginning of the discussion with you. The great value of your show is to have open discussions with people saying, you know what?

This story is far from over.

And the synapse hegemony, right?

We need to release ourselves from its shackles a little bit and breathe a little bit and think about all the potential options. And that’s why it was so interesting to have Earl on, and to have people like Sam Gershman and others who are saying, wait a minute, you know, let’s rethink.

So I think it’s a very exciting time to sort of just open one’s eyes a little bit. And that’s why, with our reorganization paper, I’m in no way saying that synaptic plasticity doesn’t exist. Of course we’re not. We’re saying, in fact, that vanilla synaptic plasticity is sufficient to explain phenomena that seem to require something special, and you don’t need to invoke anything more.

[01:14:11] Paul: You just said that it’s a wonderful time, or an exciting time, and it struck me that I was having somewhat the opposite thought: that it’s an even more daunting time. Because if you have the ability to squeeze out functions from different combinations and levels of biological activity and regimes, then it becomes harder to tell a nice, clean story. Right. You become less confident that you’re even looking under the right lamppost for the keys.

[01:14:43] John: Right.

[01:14:43] Paul: At any given moment.

I’m not sure what you think about that. I mean, I agree that it’s exciting. I also think, oh, now I need to account for the different combinations of oscillations and neurotransmitters and plasticity and circuitry and hierarchical properties. So there’s just a lot to account for in the dimension of explanation.

[01:15:15] John: But it’s interesting that what you’re doing in your description of the frustration of it all is you’re going all the way back to, and Niko Kriegeskorte has also talked about it, which is: maybe you should start with the behavior, the function, the task. Do a task-level analysis, and then start worrying about the details of how it’s done. So here’s an animal, its task is to transport itself from A to B, given its body shape.

[01:15:43] Paul: But you still have to adjudicate.

[01:15:45] John: Eventually, you have to adjudicate. In our Neuroscience Needs Behavior paper, we tried to say that there was an arrow to doing it, which is: what’s the behavior, what’s the task that needs to be done, what’s the function? And then have an algorithmic description of it, and then break the tie with implementational work.

I mean, I still feel, even now, what is it, seven years later, whenever that paper came out, that there was still a logic to going in that direction. Right. I think bypassing it by going to simpler and simpler systems where you get the circuit immediately, right, the sort of computational, mechanistic, cognitive neuroscience that we mocked a little bit in that paper, the Janelia sort of website with Vivek, where the idea is: well, if you get to a simple enough system, you already are at the level of the implementation, and you’re allowed to call what’s happening there cognition, you can short-circuit that sequence that we wrote about in that paper.

But I don’t think that you can bypass that sequence by getting to a system like a single cell or an insect and saying, look, I’m going to be Sherringtonian about cognition, because I’ve got such a simple system and I’m allowed to call what it’s doing cognition, and then I’ll extrapolate. Right.

That’s, I think, the alternative approach: doing for cognition in an insect what Sherrington did for reflexes in the cat.

I don’t think so. And I think one of the interesting things that AI is giving us, and I think Sam Gershman on his Twitter feed has made this provocative point, is: how much is neuroscience really contributing to our insights about cognition that we’re not already getting at the task level by doing AI, right? And I think there have been some objections to him, but I tend to agree with his position, which is that task analysis, psychological cognitive science, and then programming it into AI is doing more than circuit analysis for the higher-level phenomena.

[01:17:56] Paul: Doing more to what?

[01:17:59] John: More progress, I think, toward the understanding of cognition is going to happen through a combination of cognitive science, behavioral analysis, task analysis, and programming computers than through circuit analysis on lower organisms and then extrapolating from them.

[01:18:15] Paul: Do you think that same statement applies to wanting to understand the biological basis of cognition, or are you just allowing cognition to be its own function?

So, if you’re implementation level agnostic, I completely agree with you. But if you’re interested in, let’s say, the brain and how the brain does it, does that statement still hold?

[01:18:37] John: Well, it comes back all the way to what my brother calls emergence, which is: when does the understanding require looking under the hood?

Right. And what you’re implying is, it may well be that you need to look under the hood to understand cognition. But maybe you don’t. Maybe it’s just like the example he gave in a talk last year.

[01:18:59] Paul: Who’s this?

[01:19:00] John: My brother. If you want to understand Fermat’s Last Theorem, how it was proven on the page with pencil and paper, the state of Andrew Wiles’s brain is irrelevant to the truth of Fermat’s Last Theorem. So, in other words, there are situations under which the explanatory framework is autonomous from the lower level. It’s screened off, right? And so all I’m saying is, it may well be that the principles of cognition that lead us to think we’re understanding how the human brain does it may borrow from research done in AI, where a more substrate-independent conceptual framework can apply, one that doesn’t require you to look under the biological hood. Now, in no way am I claiming functionalism all the way, but I think one could say that AI is showing us what functionalism can do.

[01:20:00] Paul: Quite a lot, if it’s based on what was known about brains 70 years ago already.

[01:20:07] John: But I’m not convinced about the advances in AI that would lead to more AGI-like effects. I’m a skeptic about it, but you’d have to be a little bit of a strange extremist to say that there isn’t a possibility.

And that’s another discussion as to why we might not get to AGI for biological reasons. Just, I do want to make a final point, and I had this discussion with David Chalmers.

I’m very sympathetic to the idea that you need consciousness for system two thinking, and I was also talking to Antonio Damasio about this briefly in Portugal.

And it may well be that consciousness, like pain, is biological-substrate dependent. So ironically, it may well be that for the type of cognition that you and I are doing on this podcast, you can’t be functionalist and just algorithmic. You have to combine functionalism with biological substrate to have things like consciousness. In other words, I really want to make it very clear: I may end up not being a pure functionalist when it comes to overt, aware cognition, which you think.

[01:21:31] Paul: Would be necessary to get to AGI? Or may or may not be necessary, at the moment.

[01:21:38] John: If I had to put a stake in the ground, I would say that AGI will require system two, as Yoshua Bengio and others have said.

And it may well be that you come full circle, and to get there, you’re going to need biological substrate, and then you can’t be functionalist. But when it comes to algorithmic intelligence, system one, you can be functionalist. Do you see what I’m saying?

[01:22:02] Paul: Yeah, I do, and I think that I agree with you. If I had to put my own stake in the ground as well, I mean, part of the exercise of learning more and more about cognition for my own sake is actually going away somewhat from functionalism for things like higher cognition.

[01:22:21] John: When you read, you know, I don’t know if you have ever had Mark Solms on the show. No? Not yet. His book, The Hidden Spring. Now, I vehemently disagree with where he goes at the end of his book, but it’s really good in terms of what it covers. He does an amazing thing where he combines the drives of Freud with all the drives that are brainstem-dependent in neuroscience, saying that without those... and it goes all the way back to, you know, the difference between the passions and reason, and maybe you need them both. Right. And when I was speaking to David on the show about that, he said, oh, John, that’s very dichotomous of you. Right.

[01:23:03] Paul: Just what I was thinking.

[01:23:05] John: And all dichotomies end up being simplistic.

True.

But I am not a functionalist all the way when it comes to these kinds of discussions about how the brain is doing it.

Because in the end, an AI is going to have to care to have a discussion where there’s no obvious cost function. I mean, what’s the cost function of this discussion you and I are having? What are we optimizing for? Right? It’s so open ended.

Right? So in other words, it’s almost as though open-endedness is the feature that we have. Kenneth Stanley would like open-endedness. Who is it?

[01:23:45] Paul: Ken Stanley.

[01:23:48] John: Yeah, I know, Ken Stanley. Yeah.

So I don’t want to come across as a sort of strident functionalist all the way up. Far from it. So in other words, my own trajectory goes from very substrate-dependent work on stroke recovery and hemiparesis, through functionalism, to a belief again, at the other end, in substrate dependence. It’s almost dinosaur-shaped: tail, big body, and then long again.

But I hope that doesn’t sound completely incoherent.

[01:24:18] Paul: No, it doesn’t to me, but who knows how it’ll sound to the listeners?

All right, well, here, let’s end the episode.

[01:24:26] John: Listen, absolutely. And look, that was a great hijack. I enjoyed it. I hope it was okay for you.