Brain Inspired
BI 176 David Poeppel Returns

Support the show to get full episodes and join the Discord community.

David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.

David has been on the podcast a few times… once by himself, and again with Gyorgy Buzsaki.

0:00 – Intro
11:17 – Across levels
14:59 – Nature of memory
24:12 – Using the right tools for the right question
35:46 – LLMs, what they need, how they’ve shaped David’s thoughts
44:55 – Across levels
54:07 – Speed of progress
1:02:21 – Neuroethology and mental illness – patreon
1:24:42 – Language of Thought

Transcript
[00:00:00] David: If there’s any theme coming out here, it’s that the memory story is so foundational. No matter what aspect of neuroscience or cognitive neuroscience you’re in, we have to get a grip on it one way or the other.

Again, just because people wrote it 800 years ago doesn’t mean it was idiotic. It was people who thought very carefully about thinking.

Science has been replaced by engineering — or at least in my area, cognitive neuroscience, it basically has. Science is now correlation and regression, and that’s engineering.

[00:00:46] Paul: This is Brain Inspired. Hey, everyone. I am Paul. My guest today is becoming a recurring character on this podcast. David Poeppel has appointments all over the place, but he runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. So says his lab website. So, like I said, David’s been on the podcast multiple times in the past, once by himself, once with Gyorgy Buzsaki. And the reason why he’s on today is because a few episodes ago, on episode 172, I had David Glanzman on the podcast, and David Glanzman came on to discuss his work trying to show that memory is stored not between neurons at the synapse, which is the established story or dogma, but rather that memory is stored within neurons, likely in a more stable form in the nucleus of the neuron. So a week or so after that episode, David Poeppel popped into my email appreciating David Glanzman’s work, and reiterating how important it is for neuroscience to figure out something so fundamental: how memory works in the brain. So we discuss that and surrounding topics. We discuss similar things for language, which is one of David’s focuses. And we end by talking about David and Nina Kazanina’s recent reexamination of the idea of what’s called the language of thought. So the language of thought is a poorly named construct because it doesn’t necessarily have to do with language. Actually, it doesn’t really have to do with language at all, but it’s the idea that our thoughts must be governed by some orderly, logical structure and rules. And David and Nina show how, in principle, neuroscience already has some of the data suggesting that a language of thought is possible and should be studied at the neural level. And besides that, we just talk about how David thinks about brains and how he thinks that brains and minds should be studied, which we’ve talked about a little bit before, but we go over again and in more detail. 
And I think it’s important to have people like David in the field to remind us that it’s important to think critically and think deeply about these topics. Okay. Show notes are at braininspired.co/podcast/176. Thank you to all my Patreon supporters. You guys make my world go round. All right. And here is David Poeppel, the reason we’re talking today. Welcome back. Good to see you, by the way.

[00:03:23] David: Thank you. Good to see you. I’m glad to have a chance to chat and see if we can cause more confusion or make more problems for someone else.

[00:03:32] Paul: Well, this is kind of an impromptu discussion, because I recently had David Glanzman on, talking about memories as being stored within the nucleus internally — memory as a molecular process. And this prompted you to reach out. And I had forgotten, actually, because I think of you as the language and rhythms and the linking hypotheses kind of guy. But I had forgotten, when I did my little hundredth episode series — and I think it was in answer to the question, what is holding us back? — I was surprised that you had mentioned our concept of memory, our understanding, or lack thereof, of memory. And so then I was reminded of that when you emailed David and me about that episode.

So I really have no agenda, like I told you before, but I wanted to get your thoughts on, well, that aspect. But then you’ve written about memory recently, too, so we’ll talk about that and a host of other topics as we go.

[00:04:36] David: So I was sitting in the car driving from, I think, Cape Cod back to New York, and I was listening to a couple of your podcasts, and I had not listened to David Glanzman’s episode. And I have never met David, but I knew that one of my former research assistants, Emma Laurent, who’s actually GI Lorant’s daughter, and herself now a graduate student in psychology — she had worked in his lab, and the name sort of crept up, and I said, I really must listen to this. And I was absolutely fascinated. That was a really great discussion. People should listen to that episode, also because of the really cool historical examples and perspective that Glanzman gives in the discussion with you. And of course, yeah, it stimulated me because I’ve been thinking about that stuff from a slightly different perspective. But he is much closer to the kind of perspective that Randy Gallistel has also articulated. And these people have spent a long time thinking about the kind of conceptual challenges of memory. And I was like, all right, this is not silly.

When you hear this kind of challenge for the first time, your sort of knee-jerk reaction might be, yeah, a bunch of old cranks; their feelings are hurt that some papers from the 60s weren’t carefully read or something like that. Then you get into it and you’re like, well, actually, they have some really deep, challenging points about our conception of memory, and storage in particular. And so I was very moved by that. In fact, I had organized last year in Germany a little workshop called Beyond Associations, which was organized a little bit around Randy Gallistel’s thinking. So this topic is not dormant anymore. I think an increasing number of us in cognitive science and cognitive neuroscience are aware that there’s a serious, serious problem.

[00:06:47] Paul: Well, I mean, David, that it’s going —

[00:06:49] David: To possibly hold us back. Yeah, go ahead.

[00:06:52] Paul: Well, I was just going to say — I mean, you said it’s not being ignored, but in my conversation with David, he described his woes in obtaining funding, and over the arc of his research, the size of his lab has dwindled, et cetera. One might take home the opposite conclusion.

[00:07:08] David: Yeah, I mean, I think that’s because he was on this early on, as I guess was Gallistel, maybe they’ve been thinking and writing about this for ten years, and that’s been tricky. But now I think in the last couple of years, their message is being received more.

I wouldn’t say enthusiastically — on the contrary. I talk to my colleagues and friends; they think it’s outrageous and stupid: we have well-developed synaptic theories, and get out of here.

I think a growing number of us, coming from different fields — as I also then wrote to David Glanzman in response to the episode with you — are thinking about similar problems, because we don’t have a good story.

Glanzman, in his discussion with you, mentioned all these wonderful and really quirky historical experiments on planaria and chopping up this, that, and the other thing, and the problem of what it means to have nuclear storage. And likewise, when you talk to someone like Randy Gallistel, he brings computational challenges. Like, how would you in fact have something like stuff that you can put together if you have only a synaptic mechanism? Very difficult. These are fundamental problems, like compositionality.

Why does this relate to what we do at all? Because we have a very concrete question about storage, which is: you have a vocabulary. This is not a complicated idea to get across — like, we’re having this conversation. We have a bag of words in our head, and it works pretty well, and we have pretty decent estimates, across languages, of what that means.

You have to have a story for that. You can’t just blow that off. That’s one of the core things of how we communicate, think, deal with ourselves. So that problem — and the reason I reached out to David Glanzman is not to say, oh, we’re very close to having an idea about how the storage of your mental lexicon works. Absolutely not. On the contrary, we don’t know our ass from our elbow.

[00:09:22] Paul: But that’s a good alternate title. You sent me this piece that you wrote, a short piece in Trends in Cognitive Sciences, and the title of that says it all: We Don’t Know How the Brain Stores Anything, Let Alone Words. But I like your recent phrasing there as well.

[00:09:37] David: Yeah, I mean, this is a more muscular phrasing.

The point I wanted to make, or share, is not that we have any kind of systematic, sensible understanding of how stuff is stored in the language domain. No, not at all. It’s that we have pretty good theories — cognitive, linguistic, computational theories — of what has to be accomplished. That is to say, we can kind of decompose the problem in some sensible way and say, like, look, if you can’t do A, B, C, D, you’re toast.

Let’s just take some facet of the memory stuff that you have to accomplish somehow. And so maybe that allows us a kind of way in, to have more mechanistic, implementational approaches to the problem. So the computational theory — to be Marrian about it — is pretty well worked out, and we have alternative implementation levels of analysis. Algorithmically, again, we are kind of stuck. But that’s where I think there’s a connection with the concerns of people like David Glanzman — and he has a very nice paper where he kind of goes through the history and the arguments, and why there is actually not so much of a tension, or there’s a possibility to resolve the tension, with the synaptic mechanisms. It’s not like they’re not there.

But then there are the challenges of long-term storage, which may or may not involve intracellular mechanisms.

I think there’s a game in town, and the fact that it’s unpopular probably means that there’s a there there.

[00:11:17] Paul: I don’t know how you feel about this, but one of the things I’m recurringly struck by is — I don’t know if in neuroscience, with the modern computationalist approach where everything is computation, we’ve lost sight of the super impressive fact that all of these things are nested across levels, right? I mean, of course, for things to happen at the synapse, you have to have intracellular machinery, biological processes, right? So there is something happening in the cell.

But is that the nexus of memory? Do we need to locate memories inside the cell? And how these things interact is, to me, one of the recurring impressive feats, if you could call it that, of biology. And I don’t know if modern neuroscientists are missing that, or not appreciating it enough — I don’t know your thoughts on this.

[00:12:10] David: Yeah, no, it’s super impressive, and I think that the attempt to at least sort of speculate — productively speculate, or theorize, hypothesize — about what the relation is between all this rich intracellular stuff and what happens on the outside is totally open. And it’s impressive that it works at all. I mean, look, suppose somebody like Glanzman or Gallistel, or others — you know, Hessam Akhlaghpour, who has this extremely interesting theory of RNA-based computation, super interesting. So suppose one of these guys is right, and let’s say you store some item. In the case of Gallistel, he’s really about number; he thinks a lot about numerical cognition, and in particular inserting values, variables, into equations and stuff like that for navigation. Or suppose you’re me and you want to store a word or a syllable — “cat,” say. Then how do you externalize it to the surface? Suppose that information is a series of ones and zeros at the most abstract level — how do you actually then convey it so you can say “two cats”?

So is that actually all done in the cell? We still have synaptic mechanisms. We have communication between cells that’s probably underestimated in its complexity.

But you have to then get it out. You have to get the information in, and you have to get the information out. David Glanzman has some interesting ideas about this, and I think it’s important — about the short-term nature of synaptic mechanisms and, let’s say, a sort of evaluative mechanism. Maybe that’s a good way; you don’t want to write everything into cells or something like that. But anyway, there’s a kind of rich series of problems, where we open our textbook — whether Kandel, Bear, Nicholls, any of these — and we say, oh, look at this theory of the synapse. So cool. It’s glutamate. Awesome.

And then you’re like, well, but wait a minute.

Now let’s actually think of what you have to do. And so I think the fact that it works is amazing. But the fact that we don’t even have the intellectual courage to say we’re missing something really profound in our understanding, that’s kind of lame.

That has to change. And it’s your job to make a change by pushing it, by having people keep talking about it. Okay, good — that’s on you if it doesn’t work.

[00:14:57] Paul: Small responsibility.

A lot of the experiments that Glanzman has done and talks about — you kill an Aplysia, or you extract its RNA; you don’t have to kill it, you extract its RNA — I guess you do kill it anyway. You extract its RNA, you put it in a new one, and that new one all of a sudden has a, quote unquote, learned behavior, or some sort of behavior is transferred. Some people say, well, okay, that’s awesome, but that’s really low level. It’s like a procedural memory. It’s just behavior.

For instance, in reaction to this episode with David Glanzman, someone wrote in our little Discord community. They said, I know it’s almost cliche to ask, quote, but what even are memories, end quote, at this point, because I feel like we run into that question with almost every guest. And he goes on to articulate some confusion: he has trouble thinking about storage at the intracellular level versus at the synaptic level. How do we even approach thinking about this?

But anyway, back to the kinds of experiments that have been done. And what I want to get to is to ask you: how do you feel about — what’s your bet on whether this is a feasible thing with higher-level cognitive stuff, like words, concepts, et cetera?

Because what we’ve seen so far is, I mean, sophisticated monarch butterfly flight patterns, et cetera.

A caterpillar’s brain gets — I think I have this right — totally disintegrated and reformed, and it has the memory of what it was taught, et cetera. But you could say these are kind of low-level cognitive feats, perhaps.

[00:16:33] David: Yeah, why would it be different in kind? I mean, it’s information of some form that needs to be stored. It needs to be written down, needs to be written out, right — stored and retrieved, to be used for subsequent steps, operations, computations, whatever metaphor you prefer. And so I don’t see why it would be different in kind. It would be a tremendous, tremendous feat, and super informative, if we knew anything like that about the caterpillar-to-moth transformation, or how a word is stored. That’s all I want to offer — look, my non-contribution, just sort of stimulation.

Again, back to the word stuff, because it’s uncontroversial that we store such things and that we have them in long term memory and that they’re in some sense abstract or complicated or whatever.

Why would navigation actually be any less abstract, by the way? But that’s for later. So we know certain things about this. So, for example, here’s what we know. You make contact with whatever that stored thing is through sight, through sound, through touch — you have pointers from each. So that already tells you something about the format. The format must be sufficiently flexible or abstract that different sensory modalities — in fact, any sensory modality — can actually reach that stored thing. So that’s already pretty interesting.

It needs to be in a kind of format where you can actually take another thing and stick them together, make a chain, make a different kind of representation. It needs to be connected to something that generates output — a motor coordinate system. So we have certain criteria, or kind of desiderata, to be more hoity-toity, that simply must be met on logical grounds. And I think, step by step, peeling away the layers gives us just kind of interesting computational clues about what has to be accomplished. And if it turns out that one of those little clues is solved in planaria, or in the caterpillar-to-moth transition, that’s fine. I mean, we’re looking for mechanistic hints of how that could even be accomplished.

My complaint about the literature — and I’m actually working on a paper with one of my graduate students — is that we have been seduced by focusing on the implementation level of description when we talk about these problems, because we have cool experiments working on synaptic stuff. Because they’re doable, they’re cool, they’re by and large replicable if you do them well, and so on and so forth. And we’re building an edifice of descriptive stuff, and we have kind of been much more cavalier. And this is, I think, where Randy Gallistel is so important: about saying, well, what has to be accomplished?

What is memory for? What does it mean to carry information?

I mean, that is the most core sense of what memory is, right? So you have information that’s stored and carried forward and can be used.

[00:19:56] Paul: Well, that’s storage. Is that memory?

What should we think of as memory, then? Because there is storage, there’s the encoding, and then there’s the retrieval. Is all of that memory? But you just said storage is memory. So how do we think about memory?

[00:20:15] David: Okay, maybe memory is just a not very useful term anymore.

Maybe it has too many subparts. So you’re right.

I just informally mean it’s the stored information that can be written in and written out. So there’s an encoding that has to happen, a long-term storage that has to happen, and a retrieval, so you can plug these things in. That’s the cool thing, whether you’re talking about low-level things or high-level things. Let’s talk about the ant navigation stuff. Those are small brains, and it’s not trivial. You pull out some value — you have a counter or an odometer — and you say, well, I have to put this value into this equation so that I get out this vector.

That’s pretty cool, that’s pretty amazing. But it’s very specialized. And so you don’t need huge integrated information theory or global workspace theories of consciousness to solve that. You need a circuit that does that. But the question is: in our enthusiasm for just focusing on description of implementation, have we lost track of the problem that’s actually under the microscope, the one we’re trying to solve?

So we’re so excited about every technical advance at the level of whatever — now, single-cell transcriptomics. Cool. It’s amazing that you can do it, and it’s a wonderful description, but is it going to yield explanation?

And that’s where I’m not so enthusiastic and I think we’re being misled.

[00:22:02] Paul: So you’re not a fan of the engram, the modern story of the engram?

[00:22:09] David: Why would I not be?

[00:22:11] Paul: Well, when I think of the engram — so I had Tomás Ryan on, and I think of that kind of work, Tonegawa, et cetera, where it’s basically like these cells kind of get labeled, right? And then you have a pattern of cells, and that pattern is the physical trace. And so they get tagged, essentially. And the tagging of the cells is another issue to talk about. But I think of it in terms of the cell level and a pattern of cells, not the internal, intranuclear storage mechanism.

[00:22:44] David: Yeah, but I think that’s a debate, and Ryan has done wonderful experiments on this. I think the question is: what’s the physical basis of memory? And their notion of the engram is just a different notion of the engram than someone like Glanzman’s or Gallistel’s. So I think they’re working on the physical trace of memory.

[00:23:05] Paul: That’s what the Ngram means.

[00:23:12] David: There’s a nice paper I think it’s in Cognition by Gallist called The Physical Basis of Memory, where he challenges he lays out the arguments why it’s so difficult to do it in a synaptic way. So I think we’re all I mean, maybe we’re using terminology in too sloppy ways still because we have this sort of historical baggage.

Maybe the message for me is that it’s really critical to be a splitter and not a lumper. It’s like you said: what is memory? Well, maybe let’s be more principled and careful and be like, wow, it has all these parts. Which ones are the ones where we’re really stuck? Maybe the encoding part, at some stage — that’s not a one-size-fits-all thing; it has lots of complicated steps. Maybe some of them are more obvious, and some of them are totally non-obvious.

They’re not problems, they’re mysteries.

So maybe that’s one of the things we can do is just to be much more careful.

Pedantic splitters. Splitting pedantry as a virtue.

[00:24:12] Paul: Splitting pedantry. I mean, maybe we’ll get into this in a little bit, but you’ve recently done some work with large language models.

The paper suggests, or argues, that we should use more of what we know about human memory and basically try to build large language models with augmented memory — and there’s a lot of that work already being done. But your work says, well, there are these subroutine processes that we know have to occur for encoding and storage and retrieval, so maybe we should use those to inspire large language models. And you guys have done a little bit of that work.

[00:24:50] David: Yeah, that seems to me, like much of science, to be, let’s say, pragmatically opportunistic. Stuff is hard, and so you look where you can to get little tweaks — whatever, a good pair of scissors and a screwdriver to put the thing together, to help you understand stuff. Isn’t that what we always do? We look around, we’re like, oh, that actually could be really useful to solve this problem over here. So this requires pliers.

So oh, look, there’s a pair of pliers.

I think pragmatic opportunism has to be the partner to principled theorizing.

I like sort of theoretically inspired research, but in our day-to-day lab work, we would be crazy not to know the techniques we have available. I mean, one of the things I’ve worked on — well, one of my postdocs has worked on for a long time now, a postdoc named Yue Sun — is using computational approaches, and sometimes just counting, and carefully constructing a thing, to ask the question: what is the parts list?

We’re like mechanics — that’s how I see ourselves. What is one of the parts? And it’s just a question — a, you’d think, very banal question — about, oh, what’s a syllable? It turns out to be extremely difficult, theoretically and computationally.

We just had a paper come out a week ago or so — that’s many years of work of Yue’s, carefully constructing the argument that it’s a primitive, one of the basic Lego blocks of the language system. Now, you’d think that would have been settled 100 years ago, but it ain’t so.

So it requires a lot of work and a lot of kind of at the intersection of theory and just basic computational stuff to figure out that that’s a Lego block. But that’s cool. That’s good to know. Now we know, okay, so now we have these three Lego blocks or something like that, and maybe we can have some other Lego blocks.

But you have to be willing to look to all kinds of weird things to make these cases.

Maybe that’s more like Paul Feyerabend’s Against Method, in terms of philosophy of science. Right? So there’s a little bit of chaos, and then there’s these moments of insight that rejigger your conceptualization of the problem.

[00:27:37] Paul: This is an aside, and I’m sorry for that, but how have you — so, you strike me as someone who has always kept your head above the clouds, kind of trying to see the big picture, and you have a philosophical bent, and you always seem to have. But how do you stay in both worlds? At the low level, you’re the mechanic, but you’re also the philosopher, right? And how has that affected your career, do you think? Has it helped? Do you recommend it to everyone? Everyone has different personalities and does science differently, et cetera.

[00:28:14] David: I don’t recommend it.

[00:28:19] Paul: I can see you can kind of get sucked down into the inability to move forward if you see all of the problems.

[00:28:26] David: Right, yeah.

As an advanced, middle-aged, privileged professor, let me say one thing: as most of us know, a good thing to do — although I don’t know if it’s good in the sense of history of science and philosophy of science — is pick a problem and beat it down. So there’s nothing wrong with working on your favorite subtype of the NMDA receptor for 20 years.

Absolutely nothing wrong. Something important will be built that becomes sort of the test bed for further experimentation and theorizing. And so focusing on something — making the problem well defined, sharpening it, and really sticking with it — I think is very good. But then, if you have intellectual ADHD like I do, it’s very difficult to sit still, because I feel like I’m not even close to what I’m actually trying to figure out. So for me, this metaphor — well, it’s not a metaphor — I mean, the notion of a parts list is very important.

It’s how I think about stuff — or the notion of primitives in cognitive science, or, if you’re more philosophically inclined, the ontological structure of the domain. And it’s important to me because I understand the nature of the problem — I get it. I could say, look, there’s a bunch of parts. And it’s a little more straightforward in the case of the implementation level of description: you could say, look, there’s cells, and these cells have the following parts, and then there’s an endoplasmic reticulum, and there’s this thing, and there’s a protein there, and you can make a list. And making lists is, in some sense, very satisfying. It’s also tricky because, as Feynman says, there’s plenty of room at the bottom.

Smaller and smaller and smaller. But you can have a clear sense of primitive or elementary pieces and elementary operations. That’s something that we do, and it’s very clear to understand when we look at the implementation of tissue in biology. I think we can do the same thing in the psychological and cognitive sciences, or computational cognitive science, or whatever — who cares what we call it? That is, we subdivide the thing into what we think are the elementary constituents. Hence my example of the syllable: that’s, for me, now one of the primitives — that’s a Lego block. It’s necessary for perception, for production, and for storage.

I think that’s a known known, as you were mentioning. I think of this in terms of kind of linking hypotheses. If I’m convinced that the list of items that we have at the implementation level is pretty well developed, or well motivated, and likewise I have such a list of elementary representations — although that’s a red flag for computations — then the problem becomes a little bit more clear to me. And simply, for me, I think about it this way at the level of sort of philosophical analysis of the problem, because I’m too dumb to think about it otherwise. I need, like, lists and arrows. That’s how I can think about it very clearly, because I know it helps me define the problem. It helps me understand that, look, this is extremely difficult, because we know this has to be done, and we know this is the stuff of brain and this is the stuff of mind. How is this supposed to work?

Here’s a way to do it: be a dualist. Very elegant, no problem, our work is done. If you’re not a dualist, then the shit has kind of hit the fan, and then we have to really go to great lengths to figure out even the most elementary things, such as encoding, storage, and retrieval.

That’s one of the most elementary things that presumably nervous systems are for.

And so if we don’t have that right, then how are we going to make progress on anything else?

There always is a back and forth between nitty gritty experimentation and then zooming out and trying to figure out well what is this about?

Does this have an aboutness? And for me, because of my ADHD, I just like to read stuff about different things and I found it interesting.

[00:32:50] Paul: But you don’t recommend it.

[00:32:51] David: I do.

[00:32:53] Paul: But you just said you don’t recommend — you began by saying that you don’t recommend your approach to aspiring scientists.

[00:32:58] David: I don’t recommend it.

If you feel like you have to make steady progress and you feel on safe ground, I don’t recommend it. Stay at the implementation level of description and work on a very particular thing. If you’re already destabilized, or neurotic, or you like a lot of things, then I recommend it, for two reasons. First of all, access to extremely interesting ideas that are old and that we forget at our own peril, and kind of new ways to think about old problems. So let’s take Plato.

I mean, the notion of the dialogue — Meno, M-E-N-O — is a pretty interesting discussion about how you can know something new. What is the notion of discovery?

And when you read that kind of stuff, you’re like, actually, that’s not so stupid. It may be 2,500 years old, but these people weren’t idiots, right?

You’re like, well, I actually have to think about that, because it makes interesting arguments: Did you have to know it before? Does it have to be innate? How would you actually know it at all? How could you actually discover something new if you didn’t know there was a hole? I mean, it raises sort of the logical inconsistencies with our notion of science and our notion of epistemology, or knowledge accrual, in a very interesting way. So it’s worthwhile reading that stuff if you’re inclined to think that way. But the danger is that you then get — as you pointed out — you go down a bunch of rabbit holes. How can you reconnect it to the experimental questions that you’re trying to answer? That’s the real challenge. And then sometimes it just goes wrong.

[00:34:53] Paul: Well, this is related to the question.

[00:34:55] David: Papers — just as a recommendation, which I tell people in my lab all the time; it’s one of my biggest recommendations. Also, just read old stuff — not just because of the ideas, but because it was written better.

The papers we read right now are just excruciatingly boring, cookie-cutter, formulaic schlock. They all look the same; they sound the same.

But these older papers — read, like, Journal of Neurophysiology papers from 50 years ago. They’re beautifully written, they’re fun, they’re interesting, they’re quirky. You see someone working out an idea; it’d have, like, 60 figures, and you’re like, wow, that’s real intellectual engagement with the problem. Our papers are like, Figure 2L.

[00:35:32] Paul: Yeah, why has that changed?

[00:35:39] David: I mean, it must be for the better somehow.

[00:35:45] Paul: When you were mentioning earlier — I’m going to kind of switch gears here — the things that we know we need, implementation-level-wise, and we were discussing this in terms of language: you need your sensory modalities to be connected with the storage, to be connected with the motor modalities. What you didn’t mention is that you have to do it quickly, at a millisecond-level time frame. And you also didn’t mention oscillations in that list.

So I wanted to bring that up — time and oscillations — and large language models: what you think of them, what they can and can’t tell us, and how they’ve affected your thinking about language, multiple realizability, the implementation level, et cetera.

[00:36:34] David: At the moment.

Well, let’s go outside in.

What is science about?

[00:36:42] Paul: Wow, there you are, above the clouds again.

[00:36:50] David: There’s a point, but let me try to sort of see if I can get the questions step by step.

I think one way to think about that, historically — a kind of simple distinction — is that one way to do science, one view of what science is about, is prediction and control.

[00:37:10] Paul: I see.

[00:37:12] David: Those are two very foundational things, and they play a huge role in science and also in engineering.

A different way — complementary, not necessarily completely, but a different way of thinking about it and a different way of practicing science — is, let’s say, explanation and understanding.

And there are cases where these things come together very elegantly, and there are cases where that doesn’t work at all. In the words of Ruth Langmore from Ozark, we don’t know shit about fuck.

[00:37:47] Paul: That’s a television show. Okay.

[00:37:52] David: Very good.

The current work on the engineering of speech and language, using models of that form, falls right into this difficult tension. It’s very clearly useful. So let’s first of all dispense with “is it useful?” Yes, it’s very useful. It can do cool things.

That’s not what this is about. Yes, it’s cool that you can do all these things, although one might have ethical concerns, energy-consumption concerns — there’s lots of complicated debate worth having. But just on the science:

That kind of work is more on the side of prediction and control.

These are systems we build in order to do exactly that. Obviously the notion of prediction is at the very center of this, because they predict the next thing — and control in the sense of engineering theory.

And we can use that. We can capitalize on models like that to analyze our data, to think about what we can learn and so on. The question is, does it meet what we think of in the sciences when we’re trying to do understanding and explanation?

There? I’m not so optimistic at the moment. I think that they’re super cool and they’re also super far away from what it is that we’ve accomplished so far in 100 years of psychology and cognitive science.

Certainly, I think it’s a very interesting test bed for looking at things and for developing ideas. I do not at the moment think they have any particularly good relation to our criteria of explanation and understanding of a domain as complicated as language.

Now let’s take these big models. Suppose we start adding interesting stuff to them.

Let’s call it biologically inspired AI or something like that.

That’s an interesting question. Is that actually going to open new ways of thinking about what the models do, how we think about hypothesis formation, what we think a theory is supposed to account for, and so on? There, I think it’s early days. For the moment, I very much appreciate the engineering contribution and how we can use models like that — for example, for data analysis or labeling. Very cool, very useful. But it’s a totally different question if we’re standing around asking how this is actually going to answer a particular question about, let’s say, storage.

I have no idea. I don’t even know if it’s askable.

I’m both optimistic about one aspect of it and sort of meh about the other aspects. And, you know, it remains to be seen. In part that’s because, again, models of that form load on the level of implementation and not so much on the level of computational theory or explanation. I think that divide is a very profound one.

[00:41:19] Paul: But even on the storage question, there’s an argument to be made that, well, the answer is obvious: they don’t have nuclei, they’re not storing things internal to the units, et cetera. So things are stored at the connection weights in some sense — and pretty long dependencies as well.

[00:41:43] David: If we assume that there are sort of analogies, then the analogy would be to kind of empiricist models of behaviorist, empiricist associationist models of storage and computation.

So in terms of the way these things are conceptualized, built, and then talked about, they’re much more aligned with the synaptic notion of storage — I mean, that’s what they build on, right? The notion of, oh well, there are weights all over the network — they align well with that. And there are the challenges, à la Glanzman, à la Gallistel, à la Johansson, à la Peter Balsam, and so on.

I think it just simply hasn’t arrived there.

Take the challenge of oscillations. Suppose you build — I mean, I have two colleagues, in fact. The very distinguished visual neuroscientist Wolf Singer, who’s a member of my institute in Germany, and his team — most notably Felix Effenberger, a postdoc in that lab — have been building large models in which they explicitly make a damped harmonic oscillator a key property of every unit. And they’re saying: look, that’s our notion of what certain layers of cortex simply have as part of their infrastructure — say, supragranular layers, layer two, whatever. Let’s just conceptualize every unit, every node, as a damped harmonic oscillator.

Pretty interesting results. I believe the paper is not out yet, but they’ve built this, and they’ve really tried to analyze a range of canonical tasks and new things. And we’re collaborating with them now to test it on, for example, perception.

So that’s a way to go. It’s not really a biophysical model, but it’s biologically inspired in the sense of: let’s take a feature of cortex and see what it adds.
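The Singer and Effenberger model itself isn’t published yet, as David notes, so the following is only a generic sketch of the core ingredient he describes — a unit whose state evolves as a driven damped harmonic oscillator — not their actual architecture; all parameter values here are illustrative assumptions.

```python
import numpy as np

def dho_unit(drive, omega=2 * np.pi * 10.0, gamma=5.0, dt=0.001):
    """One 'oscillator unit': x'' = -omega^2 * x - 2*gamma*x' + drive(t),
    integrated with semi-implicit Euler. Returns the state trajectory x."""
    x, v = 0.0, 0.0
    xs = []
    for d in drive:
        a = -omega**2 * x - 2.0 * gamma * v + d  # spring + damping + input
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

# A brief input pulse makes the unit ring at ~10 Hz and decay away --
# unlike a standard ReLU unit, its response has intrinsic temporal structure.
drive = np.zeros(1000)   # one second at dt = 1 ms
drive[:50] = 100.0       # 50 ms input pulse
trace = dho_unit(drive)
```

The point of building layers out of units like this is that oscillatory dynamics become part of the network’s infrastructure rather than something the network has to learn.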

It’s a hypothesis-generation model. It says: if this works, that’s cool; let’s see if we can now turn it around and experiment. So I think there’s a lot to be learned, a lot to be gained — even in the case of something as contentious as oscillations. I don’t actually know why it’s so contentious; I think people just get really exercised about it. Those are physiological excitability cycles. People have shown them for a long time. Sometimes they clearly seem to have causal force, sometimes not.

Nobody gives a single one-size-fits-all answer for this. People are nuanced. They know it’s complicated. They know you have to be very careful about your analytics.

It’s become a sort of self-sustaining debate — there’s controversy but no issue.

[00:44:54] Paul: Yeah, well, this just goes back to the difficulty of thinking across levels, and about circular causality, right? Because you need to explain things at one level — that’s what we’re comfortable with — and as soon as you start going across levels, all—

[00:45:08] David: Hell breaks loose, because we don’t have good linking hypotheses.

[00:45:14] Paul: There you go. That’s the special thing.

Yeah, that’s another thing — Jeff Schall, when I was a postdoc in his lab: every year we read a handful of papers, and Davida Teller’s linking propositions paper was one of them. So you have kindred spirits out there still.

[00:45:32] David: Yeah, no, I think it’s a very — I mean, look, I’m not sure if you’ve had Catherine Carr, from the University of Maryland, College Park, on your podcast.

Catherine is an absolutely brilliant neuroethologist who’s done really foundational work.

For example, some of the work she did jointly with Konishi on barn owl sound localization is just the kind of work that I aspire to for the speech or language case, or any aspect of cognition for that matter, because they really worked out the nuts and bolts — from the cellular and subcellular level, to what kind of math is being done, to what kind of behavioral task, made explicit and quantified, is being solved. It’s one of the most beautiful pieces of biology, and I think it’s totally underappreciated, because it’s one of the very few examples I know where the across-levels analysis is totally successful. You should really speak to Catherine. She has a very evolutionary take — as a neuroethologist, unsurprisingly, she thinks very evolutionarily. Someone else who does really beautiful work like that is Gilles Laurent, right, with his work on, for example, sleep in lizards — very elegant, really trying to go from cells and circuits to very interesting and complicated behaviors. So one of the things that’s coming back that I’m very excited about is ethology and neuroethology — and we’ve talked about this before — the value of, again, principled pedantry: being extremely careful about the behavioral analysis.

That doesn’t mean you have to have everything in a naturalistic context — that’s almost impossible to do. But be careful about whether the behavior you’re studying is, for the organism, matched to the kind of question you’re asking. Or is it just, we do this because we’ve done this experiment for 25 years and it kind of works? I’m like, yeah, whatever.

Nice. Good on you.

I think bringing back this ethological thinking is super helpful for where we are, including for artificial-neural-network thinking. It’s very good. And it pulls us away from this kind of — what do I want to call it? — the imperialism of thinking like an implementationist.

This is the disagreement I have with György Buzsáki. Gyuri thinks very strongly in what he calls the inside-out way: characterize, characterize, characterize, measure very carefully, and then the story — the actual functional analysis of the problem — will tell itself. And I say: I respect and love you, but I think that’s wrong; I think it’s ass-backwards. You actually have to have — and that’s what we call the implementation sandwich — there’s actually a secret theory underlying this stuff before you make your measurements. It’s just latent.

The implementation is actually sandwiched between the latent theory and the actual explanation of what you’re trying to do.

And what neuroethology brings to us, what’s so exciting about it is it says, let’s try to really understand what this critter is doing.

Let’s observe, let’s measure, let’s think about it. What’s actually trying to be solved here? I mean, it’s interesting that this cuttlefish is trying to become like its background.

What’s up with that?

How is that possible? And so, again, to think much more — to not immediately make the jump. This is not to say we’re not supposed to do neuroscience; we’re supposed to do neuroscience, we’re neuroscientists — even if, in my case, a self-hating neuroscientist. But let’s not immediately jump to the level of measurement, more measurement, and more data collection. Let’s see: can we characterize what, precisely, is the problem we’re trying to study?

Behaviorally, computationally, whatever domain you’re in — before making the leap to “let’s just measure everything like maniacs,” just because we can, because the tools are cool. And the tools are amazing — it’s nuts what we can do now at the level of the tissue. It’s fantastic.

But can you point to a case where you say: well, that nailed it, that solved the problem — in perception, in motor control?

I think that actually our edifice of knowledge — the body of knowledge we’ve built — is probably more solidly built on behavioral and psychophysical data, and actually on deficit-lesion data from long ago: careful, careful, deep study of individual cases where you can very precisely characterize the deficit. Now, of course, the granularity of the neurobiological analysis is very coarse, but the functional specification has been very impressive. You read those papers and they were beautifully done — very elegant deficit-lesion work that says, oh my goodness, who knew that that dissociates so crisply? That tells you something, right?

[00:51:08] Paul: Are you talking about humans, though? I mean, I know you’re talking about animals, but I was going to ask.

[00:51:12] David: — ask you about that. The human cases are more compelling, I think; they capture the imagination. Yeah, but it was done, of course, in animal models.

Take the original multiple-pathways stuff. I spent a long time, much of my career, on this kind of dual-stream model of perception. That didn’t come out of nowhere.

The big-picture version became associated with Mishkin and Ungerleider — a lot of Leslie Ungerleider’s work when she was early on in Mort Mishkin’s lab at the NIH.

[00:51:50] Paul: But.

[00:51:53] David: There was earlier work on this. The notion of parallel pathways solving different kinds of problems in the visual system was already shown by Jerry Schneider in the sixties in rodents, showing distinctions between, let’s say, tectal and cortical contributions and so on. The notion that you subdivide the problem into computational subroutines to solve particular things — that was a long story. The first papers I’m aware of were Jerry Schneider’s from, I want to say, the sixties; then Ungerleider and Mishkin really made it a big thing. And then people like us just took that — we adopted it and adapted it. We said: well, that’s a clever idea, because it shows you how anatomical subdivisions can actually help you solve certain subproblems. And Greg Hickok and I basically just built on that and said, hey, suppose that works the same way here — that would solve a lot of our problems. And where did that come from? It came from lesions. The original idea came from lesions in animals — Schneider did it with rodents, Ungerleider and Mishkin did it with primates, and then people like Goodale and Milner showed it in lesions in humans. So that’s one example where ethologically inspired animal work goes from rodent to primate to human, to computation, to higher-order cognition, showing that the divide-and-conquer strategy is a kind of ubiquitous phenomenon.

That doesn’t mean we understand it fully, but there’s no disagreement that something of that form is the right theory. Now, there are all kinds of different versions of this — I, unsurprisingly, like our own, but that doesn’t make it right. Basically, though, there’s consensus that that is one way nervous systems solve these complicated problems.

And so that’s lesion behavior. I mean, it’s all of that stuff together.

Again, people want a quick answer — like, will large language models—? Goodness, it took like 50 years to get to this rather banal insight.

It’s just slow. Stuff is slow.

[00:54:09] Paul: Yeah, but doesn’t it feel so fast right now, with the development of the bigger models, things are moving faster, aren’t they?

[00:54:16] David: Things feel like they’re moving fast. It’s not obvious that insight is moving fast. The work is moving fast. And it feels a little bit like science is being — I’ve actually thought about writing about this — science has been replaced by engineering, at least in my area, cognitive neuroscience. Science is basically regression.

Science is now correlation and regression, and that’s engineering.

And so everything moves very fast — if that’s sufficient.

Like we talked about earlier: if the notion you’re trying to capture is prediction and control, that’s nice — I don’t know about control so much, but that’s good. If your notion is explanation and understanding, it’s not moving that fast.

What is it that you want?

[00:55:19] Paul: Yeah, well, that’s the thing, is I’m constantly coming back, so I’m self hating in this regard, I suppose. So we’re in the same boat there.

What is the real value? Because I want explanation and understanding, but then it’s really prediction and control that moves the world forward.

And it’s just a selfish sort of desire for me to understand things. But I’m not sure what the gain is, except for personal satisfaction. And I often come back to the notion that I don’t matter in this world, so what does it matter if I have explanation and understanding? How does that really help? But then mostly I live in the world of: that’s actually what I want.

[00:56:03] David: Yeah, I mean, that’s a hard question for you and your therapist.

[00:56:10] Paul: Yeah, I should go to therapy.

That’s what you are. You’re my therapist today.

[00:56:16] David: I think you’re right in the sense that we have a very kind of instrumentalized view of this. We want progress of a certain form that makes us feel like something has happened, that we’re moving forwards.

But I think there are lots of cases where that’s just not — I don’t think these things are mutually exclusive. More understanding and real explanation doesn’t undercut the value of prediction and control.

Take something like celestial mechanics.

We have pretty good understanding and explanation of why things move the way they do.

At this point, we can’t control any of it, but we can predict it.

Where do you really care? It sounds more like you care about control.

For control, I would recommend engineering or the medical sciences, where you can develop a vaccine, a pill, a cure, a procedure — but that is not necessarily completely aligned with the sciences. Look, one thing — just to say something about the sociology of science — one thing that Germany has gotten right, and other countries as well, but I’m in Germany because I work there, so I have more intuitions about it: the notion of having something like a Max Planck Society is an amazing luxury for humanity, in the sense that there is public funding to pursue pure basic research, no questions asked, as an actual common good, a good for society. There are other parts of the German system, by the way, that are much more applied — the Fraunhofer Society and the Leibniz Association; those are more like the NIH, where the task is: do something very specific, we want to see an output. But the notion that there is something like the Max Planck Society — 25,000 people work there — where you are actually encouraged and funded to just follow your hunch, because we just don’t know. And probably 19 times out of 20, it’s just some rabbit hole. But every now and then there’s a profound insight from basic research that’s game-changing.

And the fact that we value that — that’s something that probably existed more after World War II in the United States, in the sort of, you know, Vannevar Bush era. I think it’s actually pretty amazing that a country says: we’re willing to spend public funding on people where we don’t know whether there’s going to be a pill, a product, a program. We just don’t know.

And I could be wrong.

This is one of the things that convinced me to take a position there — because I assume we’re wrong. I take that for granted: that in ten years people are going to look at this and be like, well, that was cute.

Or dumbass.

[00:59:43] Paul: Instead of that, really nailed it.

[00:59:45] David: Instead of that, nailed it.

Isn’t that an amazing thing that there are countries who say, look, go to it. I don’t demand that at the end of your five year funding, there is a clear step towards a pill or something like that, or a new hearing aid.

[01:00:05] Paul: But in the long run, the proof is in the pudding, right? Because the bet is that it will produce progress in some as-yet-unknown way.

[01:00:17] David: In an unknown way that likely won’t affect us. So it’s a form of sort of intellectual altruism into the distant future.

[01:00:25] Paul: Yeah. Okay, so distant future. So it’s really a long bet.

[01:00:29] David: It’s a long bet, and I think you’re right. I share that intuition — I’m not allowed to say that, because I work for Max Planck and I’m supposed to be defending basic science here — but you’re right, and I absolutely have the intuition that the long bet is that, in fact, something will come out of it that has concrete value. How will we know until after the fact? I mean, we sit in these panels, right? You go to the NSF or the NIH, and one of the criteria for your grant is: is the work transformative?

Well, that’s nice. How would you know that until much, much later, right?

Do you honestly think without a deep sense of irony, I can write that paragraph into my grant without laughing at myself?

This is ridiculous. And I’ve said that in study sections, in meetings, and as a member of advisory boards: can we please stop? Drop that paragraph and use the space for more interesting stuff about the ideas — don’t give me the bullshit paragraph. We can leave that out.

Let’s just do our stuff and be explicit about what we do. Be very clear — for example, be very good about how the work can be replicated. That is, in fact, useful: can someone else do it, make some minor variants, and get basically the same results? That’s super, super important, because it allows us to build on that body of inquiry.

But forget the kind of “this is the game-changing aim of my grant, and when I’m done, it’s really going to help—”

[01:02:10] Paul: Stroke, schizophrenia. Schizophrenia. It’s always about schizophrenia.

[01:02:16] David: Depression, mild cognitive impairment. I’m like, yeah, okay, right on. One of the shockers of recent science is our grotesque failure to understand mental illness.

It’s a debacle. After 50 or 100 years of neuroscience, it’s just shocking how badly we’re doing there.

[01:02:45] Paul: Well, okay, so this is what I was going to ask about with the neuroethological approaches you were talking about: how far can we take that with humans, given that we have our own cognitive ontology of what humans are doing?

And maybe this will lend itself to thinking about these higher cognitive dysfunctions, right — disorders. That neuroethological approach — can we even use it, apply it to humans? How far can we take it with humans, when we are the things we’re trying to understand? We have to define what we’re trying to understand. If you want to be a Marrian, you have to come in with a computational, prior cognitive ontology and talk about what the function is, what the computational goal is. But are we even good at that? I know we talk about this ad nauseam sometimes on the podcast, but your mentioning disorders made me bring it back up. I’m just curious whether we really can.

[01:03:48] David: There are a couple of things to say. One is that, unsurprisingly, we have epistemic bounds, right? We are parochial. We have a brain that is the way it is.

Certain things we can see; other things we can’t see, because that’s our receptor structure. And the same goes for cognition: certain things we can cognize; other things will be outside our epistemic bounds for reasons of our architecture. That can bum us out in the big picture, but we’re not going to be stopped by it, and we can still ask questions as carefully as possible. What would the neuroethological approach be? There’s this tension between naturalistic experimentation — whatever that is — or characterization in the wild, and controlled work. I think both have extremely high value. Let me give you an example of each, just from my own line of work. An example of, let’s say, psychophysical pedantry taken to the extreme is a series of papers by my student and postdoc, Matthias Grabenhorst and Georgios Michalareas, on one particular computational question. One of the things about prediction that’s ubiquitous in the literature — especially on the temporal structure of perceptual experience — is reaction times and the hazard rate. As you think something is more and more likely to happen, your reaction gets faster and faster, right? You’re standing at the traffic light: it’s not green, it’s not green, it’s not green — and then you step off. This is a long literature, over 100 years old, and people think about the temporal structure of experience because it’s so compelling; we all feel it all the time. So there’s a famous theory that’s been around for a long time, most prominently worked on by Mike Shadlen in the monkey case, and by others in the human case: the so-called hazard rate.

The hazard rate is basically a function that says: as the thing gets more and more likely to happen, I get faster and faster. Now, Matthias Grabenhorst and Georgios Michalareas have worked for years on very hardcore, super-reductivist experiments in the lab to test what function you’re really building. And the answer, in a series of papers from the lab, is that what you’re actually doing is extracting the PDF — the probability density function — of the event structure. You don’t need all the extra steps to get to the hazard rate. To calculate the hazard rate you have to calculate the CDF, invert the function, and so on — a bunch of steps, all of which are a little labile. So through very careful, reductive psychophysical experimentation — you’re sitting in a booth doing the same super-boring, non-naturalistic experiment — you can nonetheless extract something fundamental: the calculation and extraction of a PDF is one of the building blocks. Just like I said earlier, for me in the language case the syllable is a building block; in the temporal-structure-of-perception world, extracting — inferring — a PDF is a building block. I believe in that. Now, we’ve done experiments —

[01:07:22] Paul: You have to infer it from incomplete knowledge, you’re saying. Yeah.

[01:07:28] David: That’s right.

So that’s an example of using very lab-centered, non-naturalistic, non-ethological experiments. But there the hypothesis is super clear and computationally explicit: it’s this function or that function. Then you do very precise experiments to adjudicate, you fit a bunch of models, and you say: look, this is what it turns out to be. It’s actually easier — easier to extract a PDF than a hazard rate — and I think it fits the data better. It works for hearing, touch, and seeing, and so on. So we can make a contribution by saying: our new hypothesis is that the fundamental computational building block isn’t function X, it’s function Y. That’s useful.
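To make the computational contrast concrete: the hazard rate is derived from the PDF by extra steps — integrate the PDF to get the CDF, then divide by the survival probability — and those are exactly the steps the PDF-based account says the brain skips. A minimal numerical sketch (my illustration, not from the papers; the event-time distribution is made up):

```python
import numpy as np

# Discretize a distribution of event times, e.g. when a light turns green.
t = np.linspace(0.0, 3.0, 301)                # seconds
dt = t[1] - t[0]
pdf = np.exp(-0.5 * ((t - 1.5) / 0.4) ** 2)   # bell-shaped, unnormalized
pdf /= pdf.sum() * dt                         # normalize to integrate to 1

# The hazard rate needs two extra steps beyond the PDF:
cdf = np.cumsum(pdf) * dt                     # step 1: integrate PDF -> CDF
hazard = pdf / (1.0 - cdf + 1e-12)            # step 2: divide by survival prob.

# Late in the interval the PDF falls, but the hazard keeps rising:
# "it hasn't happened yet, so it's increasingly imminent."
```

Note that the hazard keeps growing exactly where the PDF is already tiny, which is where small estimation errors in the CDF get amplified — one intuition for why that quantity is “labile.”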

So I think you can make good contributions with these super-reductivist, lab-based, non-ethological experiments. On the other hand, take the language case. There are, increasingly, experiments — you were alluding to this — where you listen to naturalistic stuff and use large language models to get at the nitty-gritty, and there you can make pretty nice progress. A paper I’m very happy with, that I think is very cool, was part of the dissertation of Laura Gwilliams. She was a graduate student at NYU; she’s now just started as a faculty member at Stanford, and she’s an extremely accomplished neurolinguistics scholar. One of her thesis experiments used naturalistic narratives: you’re doing nothing, you’re sitting there in the scanner, listening to a bunch of stories.

But she had a very particular, theoretically motivated question: what are you actually tracking, and how much of it at any given moment? And one of the very interesting and beautiful things she found — really best seen in naturalistic, ecological experimentation — concerns what happens as you go through the stream of speech.

Of course it goes phoneme by phoneme, syllable by syllable — but let’s take the phoneme level: single speech sounds.

How many at a time do you have access to?

With some very beautiful decoding analyses of neurophysiological data, she was able to show that at any given moment you can grab onto three. At any time point you sample, you can hear — grab onto for perceptual experience, as it were — three phonemes as you go through, and at the same time keep them separate. That is, they have separate representational identities. You don’t have confusions; it’s not that the three become a kind of Gemisch of messy, inseparable things. They are actually separable.

And that only works in a completely naturalistic case, where you don’t give people individual sounds or individual words — you have to have the stream happen. So you can make a contribution to our understanding of spoken-language recognition by doing a neuroethologically motivated experiment. Again, you’ve got to pick your weapon for the question. What’s the question you’re trying to answer?

So even in the human case, people are trying to do ecologically valid experimentation now — walking around wearing EEG headsets and having conversations. I think that’s very ambitious and bold. I also think sometimes it’s a little bit cheesy.

[01:11:03] Paul: More data collecting.

[01:11:05] David: If the question is just, well, it might work — it might work, but it is—

[01:11:09] Paul: — a lot more data collecting, for perhaps unprincipled reasons.

[01:11:14] David: Sometimes that’s the downside.

There can be a lot of — yes, it’s unprincipled, and unless it’s theoretically very well developed, it’s a lot of data mining without a question. And that is, of course, where I get pretty unhappy, or ungenerous.

[01:11:37] Paul: As a colleague or reviewer — curmudgeonly, one could say.

So, David, it’s a Sunday. I know that you take most of your day to reflect on the benevolent Christian God, and I have to go hang out with my benevolent family, so I don’t want to keep you much longer. But is there anything else on your mind that you want to get off your chest? Anything else bothering you?

Really? I just wanted to have you on to kind of shoot the shit about your thoughts on the memory thing and then get an update from you.

[01:12:12] David: Yeah, no. Look, again, if there’s any theme coming out here, it’s that the memory story is foundational no matter what aspect of neuroscience or cognitive neuroscience you’re in — we have to get a grip on it one way or the other, whichever team you’re on. And I think you’re doing an important service to the field, in the sense that you’ve got to have many people reflecting on something, because we have to sort it out. We’re just kind of stuck. I mean, this is why I wrote this paper about — we never talked about the language of thought, which is a—

[01:12:44] Paul: — completely different kind of — oh yeah, we do need to talk about that.

[01:12:49] David: Language of thought is an interesting philosophical idea, which I think is actually correct.

[01:12:55] Paul: Let’s talk about that because I did want to talk about that because it’s a huge topic.

You’ve written this piece talking about how you think it’s correct and some of the reasons why you think it’s correct. But within the article, you talk about how it kind of disappeared for a while, and is that true? Was the language of thought prevalent and then discarded, and now it’s reappearing? Is that a correct story?

[01:13:19] David: Yeah, a little bit. So the language of thought was very prominent in medieval philosophy, and the idea was, well, how do you think? How does it actually work at all? What format do you think in? And then in the philosophical literature, most notably, the philosopher Jerry Fodor sort of reignited that notion, I guess in the sixties and seventies, saying, look, if this is a thing, if we’re trying to figure out how we’re able to think, and he was really wearing his cognitive science hat more than his philosophy hat here, it has to be a pretty abstract format.

And, unfortunately, he used the expression “language of thought,” which made everyone think, oh, he means language, which is precisely what he didn’t mean. That’s a kind of bummer of a misnomer. He meant a formal system of thinking, and he was trying to develop a computational theory, saying, look, for the kinds of things we do, you need stuff like variables, functions, something like predicate logic. Not a very bold claim. But it has things like variables, it’s very abstract, it means you insert values into functions, and so on.

But it was a very interesting idea. And it also said that, look, there are aspects of thought that are separate from language, or separate from other domains, or separate from, let’s say, mental imagery and stuff like that. It’s a pretty interesting idea.

And then that idea became pretty unpopular.

Well, it wasn’t really addressed at all, but then the neuroscience world basically said, look, it’s horse shit.

[01:15:09] Paul: Why is that?

[01:15:10] David: It can’t be implemented. It’s the kind of stuff that’s simply so far away from our notion of sensory systems or neural computation.

Such a thing can’t really be done; it’s a silly idea. But in the last few years people have been sort of thinking about it again, actually, trying to get our heads around the relationship between language and other forms of representation. And so Nina Kazanina and I, our motivation was very straightforward. We think that, whether it’s right or wrong, it’s a very interesting hypothesis. And we simply, as we wrote in the paper, disagree with the argument that it can’t be done. And the way we construct that argument is super straightforward. We say, here are the requirements for a language of thought, this is what you would have to have. Okay, guess what? Here’s an example: here’s a bunch of hippocampal cell types that do exactly what you think can’t be done. They meet all the criteria: filler-role independence, abstraction, scaling, and so on. So it’s simply false to assert that it can’t be done. Neural systems already have precisely that kind of architecture. And we don’t argue that thought is spatial, or that it’s all hippocampus. We simply want to provide examples. Say, look, here’s a couple of cell types. They’re extremely well evidenced, well known. There’s a fact of the matter.

They embody, functionally, precisely the things that are ostensibly not possible. Therefore it’s a bad argument, and the language of thought is in fact absolutely neurally possible. Our job is to figure out how it’s done, where it’s done, and so on and so forth. But I think there’s a growing interest in this: can we begin to get at thinking, which is kind of internal and in part separate from language, right? There are aspects that are just ineffable.

They can’t be externalized through language because language and thought are dissociable.

So that’s a fun and interesting way to think about it. Our demonstration is a very short and easy paper that simply says, hey, the entire entorhinal and hippocampal system is full of exactly the cells that are doing it. So go back and try again. Different argument, please.

[01:17:25] Paul: But your argument is not... you’re not pointing to just a list of all the different cell types, boundary cells, boundary vector cells, object cells, et cetera, and saying these cells are doing the language of thought. You’re just using them as a reflection and saying, look, a language-of-thought-type process is being reflected in the activity of these neurons.

[01:17:50] David: That’s exactly right. It’s exactly as you’re saying. I mean, one can then have different debates, and some people say, look, we think a lot of thinking is in fact spatial, and so on. But that’s not the point. Our argument is precisely as you summarize it: we’re showing, in principle, here’s the type of operation that cells have to do, and here’s an example of them.

Best wishes.

[01:18:13] Paul: So where are you with that? Are you continuing it? Or was it really just, hey, everybody, look, and then you’re moving on?

[01:18:21] David: I mean, of course. So Nina Kazanina, who’s in Bristol and is now going to be a professor in Geneva, is going to continue that. I’m going to continue it. But that’s for young people. That’s a young person’s game. It’s very difficult. But again, it’s a case where there are well-developed, theoretically and computationally explicit ideas about, let’s say, predicate calculus or something like that. And for anyone who studies vision or navigation, this is not a surprise. You have to do mathy-type stuff. Why this is considered so outrageous baffles me, since even to get from here to wherever you’re going with your family requires all those things. I don’t get it. I just don’t get why this is.

[01:19:10] Paul: So objectionable, frankly. You reference a few different kinds of review papers that touch on the language of thought as well, and I guess they’re all arguing that, hey, we do need to... yeah. Section one starts with “the reemergence of the language of thought.” This is from Mandelbaum et al. And there are a couple of reviews that you point to.

[01:19:32] David: That’s right, yeah. Basically, in the last couple of years, there have been real acknowledgements that it’s a very important hypothesis about how the cognitive apparatus is organized, and that it’s time to kind of rethink very carefully what’s going on. And so, yeah, Quilty-Dunn, Mandelbaum, and others are saying, hey, there’s actually a game in town, certainly for cognitive science, psychology, and philosophy, and let’s actually see if it has very clear and good and testable implications for brain science. I’m totally on board with that and excited to see it come back, because, again, just because people wrote it 800 years ago doesn’t mean it was idiotic. It was people who thought very carefully about thinking. They didn’t have the apparatus we have, but they said, something smells funny. We have to figure out how we do this.

[01:20:19] Paul: How do you think they only had.

[01:20:21] David: The computational theory of mind. They had nothing else. No imaging, no autogenetics, no single cell transcriptomics. Just thinking cap.

[01:20:28] Paul: Yeah, but how do you think of this in terms of the recent popularity of dynamics and attractors and state spaces? How do you think of that compared to something like the symbolic operations needed for a language of thought to occur?

[01:20:50] David: I don’t quite understand why these should be mutually exclusive. They might be approaches that solve different kinds of subproblems, or different problems in general. I think the dynamics stuff is super interesting and important to pursue, but I don’t see a principled conflict between having that kind of machinery and having symbolic computation implemented. I just don’t see why that should be a nonstarter for both.

Look how weird stuff is in the brainstem. I mean, totally closed configurations, little nuclei that do, like, very specific things. People don’t argue about that.

I really urge you, motivate you, invite you to consider inviting someone like Catherine Carr, for her work on evolution and neuroethology, because there you just see a kind of generosity of spirit and biological principledness that shows you how you can actually link levels, and not worry about, like, maybe this is a dynamics kind of explanation over here, but we really need symbolic computation to solve this equation. The brain is a complicated place, right? So I don’t see that they rule each other out, and they might not be solving the same kind of stuff. Both seem extremely important and interesting to pursue.

[01:22:21] Paul: Okay, David. Well, I appreciate your generous spirit, as always.

Thanks for coming on. You kind of did it on a whim. I got that email from you, you sent me that paper, and I thought, I should just ask him to come back on. So I appreciate you sharing your.

[01:22:35] David: It’s fun. It’s fun to talk about what we do, and it’s entertaining. Now you go play. I’m going to go to the beach.

[01:22:40] Paul: All right.

[01:22:41] David: I think I’m going to go to Rockaway Beach.

[01:22:43] Paul: Okay. Happy beaching. Good talking.

[01:22:45] David: Bye, Paul.

[01:23:01] Paul: I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You’re hearing music by The New Year. Find them at thenewyear.net. Thank you for your support. See you next time.