Brain Inspired
BI 164 Gary Lupyan: How Language Affects Thought

Support the show to get full episodes, full archive, and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language, naming things, and categorizing things change our cognition related to those things. How does naming something change our perception of it, and so on. He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.

And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.

0:00 – Intro
2:36 – Words and communication
14:10 – Phenomenal variability
26:24 – Co-operating minds
38:11 – Large language models
40:40 – Neuro-symbolic AI, scale
44:43 – How LLMs have changed Gary’s thoughts about language
49:26 – Meaning, grounding, and language
54:26 – Development of language
58:53 – Symbols and emergence
1:03:20 – Language evolution in the LLM era
1:08:05 – Concepts
1:11:17 – How special is language?
1:18:08 – AGI

Transcript

Gary    00:00:03    We think we are all talking about the same things, uh, but often we’re just sort of using the same words, right? And maybe often reaching similar understandings. But underneath there’s this vast variability. It always struck me as strange, uh, to think that, okay, if you have no grounding, uh, to the external world, you know, you can’t have any meaning at all. But then, okay, uh, how much grounding do you need? Like if you put in a little bit, now suddenly everything is grounded out, and then meaning just magically appears. That doesn’t seem right. It doesn’t really make sense that you can take a thought and beam it into someone’s brain, right? And have that thought make sense to the other person. But of course, with language, that’s kind of how it is.  

Paul    00:00:59    This is Brain Inspired. I'm Paul. Welcome everyone. My guest today is Gary Lupyan, who runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models, um, but Gary is more focused on how language and naming things, categorizing things, um, how that changes our cognition related to those things. How does naming something change our perception of it and so on. Um, he's interested in how concepts come about, how they map onto language. So we talk about some of his work and ideas related to those topics. And we actually, uh, begin the discussion with some of Gary's work related to the variability of individual humans' phenomenal experiences, our subjective experiences, and how that affects our individual cognition.

Paul    00:01:57    For instance, some people are more visual thinkers, uh, others are more verbal, and there seems to be an appreciable, um, spectrum of differences that Gary is beginning to experimentally test. And of course, we cover a lot more topics, um, related to language and cognition. Show notes for this episode are at braininspired.co/podcast/164 on the website, uh, braininspired.co. You can also learn how to support this podcast to help keep it running. Thank you in advance, uh, if you make the decision to take that generous action. Thanks to Gary for being generous with his time. And here he is.

Paul    00:02:36    So I was in a, uh, oh, let’s call it a disagreement with my wife, um, the other day. And I was tasked with expressing my feelings, um, to her. And of course, I failed, as I always do, uh, because they’re like ineffable and words don’t really do justice. You know, I, I felt limited if I used, started using words to describe what I was feeling, it was gonna limit the actual experience that I had had or was having. Um, and I feel maybe this whole time you can just disabuse me of this notion. I feel that words are limiting, uh, in terms of cognition. Uh, yes, they’re highly abstract, but they don’t capture the essence of what I’m feeling. <laugh>, you know, feelings are a whole nother Yeah. Bag of tricks here. But, um, but why is it that I think that the most interesting things are ineffable and words don’t capture, like categorizing it using words is maybe limits it, if that makes sense. Yeah,  

Gary    00:03:36    Yeah, yeah. Um, the way I tend to think of it is that the currency of language is categories, and that makes it really great for conveying certain types of information and, uh, really bad at conveying other types. And so, whether language is good, uh, for communicating, depends on what one’s talking about. Um, now when it comes to feelings or for example, faces, uh, some faces are notoriously hard to describe in, in language. And it’s not just, uh, problem of English. No language is good at describing faces because what distinguishes one face from another, for the most part, is not categorical. And so the best we can do with language is, and, and this is very telling of how language works, is, uh, when it comes to describing a face, is to remind someone of a face that they know, right? And then say, well, it’s kind of like this person, but, and then put a little, you know, nudge them in some direction.  

Gary    00:04:35    But older, right? Or friendlier. We're good at interpreting something like "friendlier," it's interesting that we are good at that. Wow. Uh, in terms of, you know, having some visual appearance. Um, but we, despite this, we figure out ways, and this is what a lot of, you know, what good writing is about, of conveying things like feelings, uh, visual descriptions, using language, but it's hard. Uh, doing that effectively is hard work. Um, I think of language as, uh, a series of cues that we exchange with one another. Little, little cues for constructing, uh, mental representations. And, uh, a lot of what we talk about, um, actually lends itself to the kind of categorical cues that language is really about. Um, but, uh, yeah, I mean, and, and my other response is, well, compared to what? So when it comes to conveying our, our emotions, well, what's the alternative? I mean, you can do a lot with your face, uh, but it doesn't have that sense of being about something in particular, right? Right. Uh, so, you know, in many cases, language might be the best we can do, even for conveying kind of ineffable things.

Paul    00:05:52    So my wife is not dumber than me just because she expre uses words to express things that I, I can’t <laugh>,  

Gary    00:05:59    I mean, well, no, but is it, do, do you feel like it works? Like is from  

Paul    00:06:06    Her? Is she able to communicate her?  

Gary    00:06:09    Yeah. Is she able to communicate  

Paul    00:06:10    Emotions? She's in touch with her feelings and, yeah, she's great at it, and I'm terrible at it. And, you know, I mean, that's, I think it's an unusual story, but I, yeah,

Gary    00:06:18    There's skill in, you know, on your end as well, right? In interpreting it. Um, so she might be really good at it, but like, it takes skill to, to be on the, on the receiving side, on the comprehending side, and actually taking those words and constructing something that is meaningful to you, and presumably, right? Even if it's meaningful to you, but it's totally different from what she was intending, like, we'd say that that's not a totally successful act of communication. Um, but yeah. But often we can kind of meld our minds in that way with language.

Paul    00:06:49    So I, I just wait it out in deafening silence, and then eventually it goes away. I thought that was the right thing to do, but, okay. Um, so we have, you know, a ton of stuff that we can talk about. Maybe I'll, um, I, I don't know if I wanna start with, uh, inner speech. I was gonna bring this up later. I was telling, I had Ellie, uh, Pavlick on the podcast. Yeah. And I was telling her that I was gonna have you on, and that you're interested in this, um, the phenomenal variability, like the variability in all of our phenomenal subjective experience. And, and part of that is our inner speech, that, and I know that you're interested in, and I was telling her, uh, that I feel like when I find myself talking to myself during a task or something, using language in my head, I feel dumb.

Paul    00:07:34    Uh, this is not all about me being dumb, but, um, you know, a good portion of it is. But I, I like, feel like, why am I talking to myself? That seems, in the same token, you know, I remember my dad, um, back when we didn't have iPhones, but there were GPS systems that you could put in your car, and they were specific for GPS, and they would talk to you, and he would talk back to it. And I thought, oh, man, that's unfortunate. So <laugh>, so is inner speech, uh, a sign of intelligence or, uh, the opposite? And, and then we'll talk about, um, variability in our phenomenal experience, I suppose.

Gary    00:08:07    So I had a paper that I wrote, um, a while now ago when I was a post doc about, um, talking to yourself, uh, in the context of, uh, visual search. So can repeating the name of something, help you find it. And, um, for whatever reason, like a year after it came out, it suddenly started getting all this media attention. And even now, like, I don’t know, dozen years later, like when you Google talking to yourself, like, this stuff comes up and the framing is often, uh, you’re not crazy if you talk to yourself, right? Like science has shown, uh, and it, and it gets away from the <laugh> the core narrative. It’s like, you know, if you talk to yourself, it, it, it means you’re a genius. It’s like, that’s not what the paper is about at all, <laugh>. Um, but, uh, I think, I think there is, yeah, there’s a, well, there are two things.  

Gary    00:08:58    Uh, there’s the actual overtly talking to yourself, and that, you know, at least in our culture, uh, that’s discouraged. And that’s kind of often seen as like, oh, you know, uh, is, is is there something wrong with this person because we expect it to be inhibited, right? And then there is inner speech, right? The kind of covert, um, experience that really a large majority of people have. And one thing I’ve learned, uh, since we started studying it is that, you know, we, we, we all tend to think, uh, and we reflect on our experiences. Uh, the common thing to do is to think that, well, what we have is the typical thing. Uh, and then you realize, you know, that, well, there’s a distribution and you’re somewhere on it. Uh, and so I thought my inner speech was typical, but it seems that it’s kind of like on the low end, uh, you know, there are questions that on these questionnaires where like, most people put themselves as a 10 on a one to 10 scale, like that they do this constantly.  

Gary    00:09:55    And, and I put myself as something like a five, but I, I thought that that's where most people were. Um, so I think intuitively it seems kind of silly, like, how can you learn something from yourself that you didn't already know? But of course, that's, that's the wrong way to think about it, because there's no central you, right? That knows everything. And that's why, you know, when you're writing, like everyone seems to have this experience, right? You learn something by writing, it helps you organize your thoughts. And so I think of talking to yourself, uh, as a, as a similar type of thing, uh, you're linearizing things, you're, you're making connections that, uh, you're obviously capable of making, but may not have made prior to actually kind of pushing it, pushing it out and back in. So Dan Dennett, in his, uh, Consciousness Explained book, uh, has this little, uh, figure, uh, I remember the caption, this cognitive autostimulation, and it's like a little, uh, schematic brain with like these kind of semi-connected parts. And then, you know, there's an arrow that comes out of the mouth and, and back into the head, right? Mm-hmm. <affirmative>, Andy Clark has also written a lot about this idea. So, um, yeah, I mean, I think of language as a technology, and when we talk to ourselves, whether overtly or covertly, right? We're using that technology, um, you know, to, to help ourselves think through things. And it's good for some things, less good, uh, less good at other things.

Paul    00:11:30    What is it good for? I mean, so I appreciate language as an abstraction, and a lot of your work has shown that labeling things, um, helps you do things faster and helps your cognition about those abstract general concepts. But I, I feel like, uh, for the majority of stuff that's at least interesting to me, labeling, well, one thing about abstraction is it takes away details, right? Uh, and so you're kind of simplifying, uh, everything to a degree that might, that might hinder thinking about it. Um, and so I, I'm not sure how to think about this. Like, where do you think, um, the balance is?

Gary    00:12:05    Yeah, I think about it being less about removing details and more about highlighting, um, dimensions or details that are relevant for a certain task. So you might, for example, realize that there is a connection between two, say, situations or two processes, that there’s some underlying similarity, uh, and, and helps, and it helps you kind of reach some, you know, conclusion, draw some inferences that you otherwise wouldn’t make. Uh, and so you are in a, in a way, ignoring certain details. I think it’s rare that we actually get rid of them. We, we hang onto lots of things, even if they’re not relevant to current tasks because they’re relevant for other tasks. Um, so for example, if we have a task of, you know, understanding what someone’s saying, so knowing what words they’re saying, um, it might not be relevant what their voice sounds like, but you can’t help but also attend to the qualities of their voice.  

Gary    00:13:07    Because at the same time, like those things are important for identifying like where they are and where you should turn your head, uh, to face them, uh, who they are. And, um, those kinds of things, you know, can actually help feed into the meaning, because you have expectations, you can deploy different priors depending on, you know, the last conversation you had with this person or what you think this, you know, person is likely to know based on the, the specific jargon they're using. Uh, and all of that kind of becomes relevant, even for something as simple as knowing what word this person is saying, right? Going from like the raw speech to some, yeah, more, uh, invariant representation. So, uh, yeah. So I don't think we're really throwing away the details, but yeah, we're weighing different dimensions. Um, and so yeah, I'm, I'm happy by the way to circle back to inner speech later, because I think there's a lot to be said about that kind of variability and what it means about studying connections between language and cognition. Um,

Paul    00:14:11    Oh, okay. I was about to bring us right back to, um, to that, because I mean, this kind of ties into the, the idea of the dimensionality of language, right? Um, so, so we all have different phenomenal experience. Um, and one way to look at that is that our phenomenal experience is important, and however different, however subtle those differences are, whether or not they make a difference to our cognition is something that you could tell me. Um, and, and you know, what, you’ve come to think about this, but then I also thought, well, it may be that we all have the same high dimensional stuff going on under the hood, but our phenomenal access to various parts of it differs. And that’s, that’s what leads to the, uh, differences in subjective experience. So maybe you can just elaborate on that, on that. That’s a ridiculous idea. Yeah.  

Gary    00:15:00    No, it's, it's not a ridiculous idea, and it's, it, it's really an empirical question. Uh, there's probably some of both. I mean, that's kind of a boring answer. But, um, so as far as variation in visual imagery, inner speech, um, there's certainly work suggesting that in some cases the best explanation might be, uh, different access, but there's also work showing different performance on objective behavioral tasks. So, uh, mm-hmm. <affirmative>, we find, for example, a relationship between subjectively reported inner speech and, um, how well people can, uh, judge whether two words rhyme, um, uh, their kind of what's called verbal working memory. Um, uh, there's also lots of, uh, correlations between subjectively reported inner speech and other aspects of subjective experience that we, you know, might or might not take at face value. So for example, um, earworms, you know, getting songs stuck in your head, very common experience. People who report not really having, uh, much inner speech also report that, you know, they, they know what this is about, but, uh, they report that this doesn't happen to them very often. Uh, something that, especially in the college student population, uh, this experience of like, thinking about a recent conversation you've had and thinking about like, well, maybe I should have said this and that, like,

Paul    00:16:27    Oh,  

Gary    00:16:27    Yeah. You know, huge endorsement of like, you know, how often does this happen to you? Most people are saying all the time, all the time, all the time. People with less inner speech are saying, eh, you know, nah, not, not, not much <laugh>. Um, right.

Paul    00:16:40    I, I missed the, what, what, what about the, the ear worm? What was the relation between getting songs stuck in your head and inner speech?  

Gary    00:16:46    Just that people with less inner speech, uh, are less likely to experience it. So less often, I wouldn't say never, but yeah. Um, uh, but, but there's absolutely value in trying to understand the sort of differences in cognitive profiles using objective, um, assessments. So more of this has been done in the, uh, visual imagery realm than in, in the inner speech realm. And there you find, you find really interesting patterns where in some cases you find, uh, objective differences. So different patterns of, uh, recall and memory, for example, less visual imagery, less detail in recall, even when that is the task, recall in as much detail as you can, you know, some experience, uh mm-hmm <affirmative>, but in other cases, no differences in the kind of gross-level performance. But then when you start going deeper into, well, how are people doing the task?

Gary    00:17:38    What are the correlations between tasks? You find differences suggesting that people are using different strategies, and, and it's in line with what they report. So it's a case where we should take people's self-report seriously, because, you know, they're doing some memory tasks and you ask 'em, well, how did you do that? And they tell you how they did that. And it's not how people with typical visual imagery do it. And then you sort of do a follow-up study. Well, you know, if they did it using this kind of strategy, then they should show interference with these types of stimuli. And you find indeed that, like, the, the results bear it out. So these differences have consequences. Um, and, uh, but it's, it's lots and lots of unknowns. Um, another relevant area is synesthesia, where at one time people thought, oh, you know, yeah, people are having these different phenomenal experiences, but they're not really, like, perceptual.

Gary    00:18:28    And then it's not that hard to design some actual psychophysics studies to, to test whether people's attention, for example, is being kind of involuntarily grabbed by, um, you know, where, uh, someone who has, uh, various forms of space-time synesthesia, where, you know, thinking about a certain month is associated with a certain part of space. And then you can see that indeed, unlike people who, who don't report having this phenomenal experience, in the synesthetes, you cue them with a month and their attention kind of automatically goes to that part of space, uh, even when it's completely irrelevant to the task. Um hmm. Yeah.

Paul    00:19:12    So what are you thinking in terms of how wide the variability is in our, uh, differences between people in their subjective experiences? Is it a subtle thing? Is it a wide landscape?

Gary    00:19:29    Um, I think it's, it's relative to our expectation. I, I, my hunch is that it's, it's much more than we expect. I stumbled on this video, uh, just yesterday, I can't believe I haven't seen this before. It's a bit of Richard Feynman, it's like a six-minute clip called, uh, Ways of Thinking, where he makes this point, um, that, uh, and, and I'll post a link to it or something. People can watch it themselves, um,

Paul    00:20:03    That link to it. Yeah, sure.  

Gary    00:20:04    Yeah. Uh, he, uh, he talks about kind of how some experiences in his life, uh, that led him to think that there is this huge variability in how people kind of experience different types of thought, even when they come to the same conclusion, right? And that we, we think we are all talking about the same things, uh, but often we’re just sort of using the same words, right? And maybe often reaching similar understandings, but underneath, there’s this vast variability, and, and it, it can be studied experimentally. Uh, and so one thing that excites me so much is that, um, it really is an empirical, uh, an empirical question. And, um, it, I think one angle we’ve taken is focusing on this hidden aspect where, uh, these, these kinds of variability that seem to exist that people are, are unaware of, uh, because there’s lots of variability, um, in behavior that we all know, right?  

Gary    00:21:06    We all know that some people, uh, go to bed later than other people, you know, morning people. And the reason we know this is we get lots of feedback from the world. We know that people have different food preferences, right? And so, um, there is a tendency even there to think that others are more like you. Like if you like chocolate, you think more people like chocolate Yeah. Than if you personally don't like chocolate, right? Right. But, uh, but when it comes to these hidden differences, right, like visual imagery, synesthesia, inner speech, one can compare notes, uh, one can study it, but we tend not to, and we just project our own experiences onto the world. And one reason why I think we can, uh, keep going around, uh, about our, our lives and not realize that people have these different experiences is that their consequences on behavior are not as big as one would expect, because, uh, it, it's a more robust system.

Gary    00:22:01    Um, than one might expect. So, so if there were lots of things that really, really required vivid visual imagery, then people who didn't have vivid imagery wouldn't be able to do those things. And they would know that. But in fact, a lot of the things that we assume require imagery, uh, actually there are many ways of, of doing them without the use of imagery. And so it, it kind of opens the door to studying this diversity of solutions. Um, and in this paper we have on hidden differences, we, we draw, uh, a comparison to, um, what's been called cryptic variation, uh, in genetics, um, that a lot of, you know, you, you do gene knockouts, and you find that for a lot of things, like you don't find any observable, uh, effects on behavior. And the question is, why should it be that way? Uh, like, why have this redundancy?

Gary    00:22:57    And, uh, one answer is that, well, you have this redundancy to ensure that different developmental trajectories can all lead to a, uh, a functioning organism. That if you had to do things in a very, very precise way, it would, you would not have sufficient robustness, basically. And so, as a consequence, you find that there are multiple pathways. And so knocking out various genes often doesn’t have any consequence because they’re just one kind of path of many to getting, uh, that development. Right? Uh, and, uh, and, and I think of these hidden phenomenal differences, this kind of, you know, repeating that same process, but at a higher, at, at the next level where, okay, you have a functioning organism and you have to, you know, maybe in your culture, learn to read or learn to do math. Um, and well, there are actually many ways of getting there. And so, despite different starting points and different, different trajectories, you can get there, but you, you, you, you don’t necessarily do it in the same way.  

Paul    00:24:03    I had Dean Buonomano, uh, on the podcast a long time ago, and I don't remember how, but I later learned that he has aphantasia, I believe is the term for it, where, yeah, he doesn't have any visual imagery. And I think that is so insane to me, to be able to, like, yeah, think without visual imagery. But, uh, but that's why I was asking about the, is it just a matter of, like, whether we're accessing it phenomenally, or whether it's actually going on under, under the hood and we don't really know about it, or, you know, yeah. I guess this is the multiple realizability issue.

Gary    00:24:35    Yeah. Yeah. I think for certain kinds of questions, the access question might not really matter, right? If the work is being done by the, the part that we are conscious of, uh, by kind of interacting, um, with the sort of visual images that we, um, consciously experience, then even if at some lower level it's an identical system, right? Not having phenomenal access is what might make the difference.

Paul    00:25:07    Um, well, that’s what you’re finding is you’re finding these behavioral differences based on whether someone is having phenomenal access  

Gary    00:25:13    To it. Yeah, yeah. Yeah. Exactly. Um, so, uh, but it's, it's not to say, right, that, like, these differences in, in phenomenology need not imply that, you know, visual processing in, in some people is fundamentally different. I don't, I don't think that's the case. Um, yeah.

Paul    00:25:33    But, okay. Yeah. But so, so would the conclusion be that consciousness has a function <laugh>,  

Gary    00:25:41    Oh, I, I, yeah. I think absolutely consciousness has a, a function. Uh, so I, I sort of reject the zombie thought experiment, right? That, sure, you know, logically of course we can think of such a, such a creature, but, uh, I, I think someone who is, you know, unconscious, without the phenomenal experience, like, would just not behave in all the same ways. So, uh, yeah. But, but I also, like, on the other hand, uh, I think one could absolutely make the mistake in the other direction and put too much, too much weight on conscious experiences, whereas in reality, a lot of these are, you know, post hoc rationalizations, and the reason we do something isn't, yeah.

Paul    00:26:24    So should, should I feel okay about having inner speech? Should I be talking to myself more, internally or externally, or less? What, what should I be doing?

Gary    00:26:33    Well, okay, so, so one thing that we've been really interested in is, um, the relationship between, uh, the relationship between language, uh, both, uh, publicly used language, but also inner speech, um, and kind of aligning our minds, right? So, um, Andy Clark and I have this little essay in Aeon about, uh, telepathy, kind of as a thought experiment, right? And, uh, we, we both have this intuition that, you know, it doesn't really make sense that you can take a thought and beam it into someone's brain, right? And have that thought make sense to the other person. But of course, with language, that's kind of how it is, right? We, we, we have words, uh, that are supposed to denote some, some sort of thought, and we use them, and then the other person, you know, understands, uh, those words are meaningful to the other person, uh, to some extent, right?

Gary    00:27:30    And so, uh, if you can beam words from brain to brain, right? That would sort of work. But that’s not really telepathy, that’s just like fancy texting, right? That’s, that’s not that interesting. Uh, the whole point is of telepathy, of the, the trope of telepathy is that you can take language out of the loop and be able to communicate with someone who doesn’t share your language with an alien, whatever. And so, you know, we argue that we have no reason to think that this would work. Um, but that language may it, it, it could be that that’s the best we have, right? And, uh, that it is what allows us to align our mental states sufficiently, uh, to achieve at least some level of understanding. And so the idea, uh, there’s an experimental angle to this question, which is, uh, if people use more language, are they more aligned?  

Gary    00:28:21    And so we do have some preliminary data showing that people who report using more inner speech are more similar in their responses, uh, to things in their similarity judgments. They are sort of more aligned, uh, which is what you would expect given that language is this categorical system. And so if you are kind of, uh, thinking more categorically, whatever that means, you are more likely to align because kind of, it’s easier to align on categories than on particulars. Um, and, uh, yeah, so we’re, we’re exploring this idea. And so one implication is that, uh, in using more inner speech, um, might cause you to sort of align more with other people, which, you know, if you are trying to be wildly original and express some idea that is, is as new as possible, uh, maybe that’s not what you want. Um, if you want to express an idea in a way that can be understandable to the most people, uh, maybe that is what you want, right then. So it, it depends, <laugh>,  
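
To make that alignment idea concrete, here is a minimal illustrative sketch, not the lab's actual analysis: score each participant's similarity judgments against everyone else's, then ask whether that alignment score tracks self-reported inner speech. The data and the 1-to-10 questionnaire scale below are hypothetical.

```python
# Hedged sketch: do people who report more inner speech give similarity
# judgments that agree more with the rest of the group? All data are simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects, n_pairs = 40, 100

# Hypothetical data: one row of similarity ratings per person, plus a
# self-reported inner-speech score (assumed 1-10 questionnaire scale).
ratings = rng.normal(size=(n_subjects, n_pairs))
inner_speech = rng.integers(1, 11, size=n_subjects)

def alignment(i):
    # A person's alignment = mean rank correlation with every other person.
    return np.mean([spearmanr(ratings[i], ratings[j])[0]
                    for j in range(n_subjects) if j != i])

align_scores = np.array([alignment(i) for i in range(n_subjects)])

# The claim under test: more inner speech goes with higher alignment.
rho, p = spearmanr(inner_speech, align_scores)
print(f"inner speech vs. group alignment: rho = {rho:.2f}, p = {p:.3f}")
```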

Paul    00:29:27    How, how would this, uh, if we had brain, brain interfaces, right? I mean, would that just further the, I guess, would there be more particulars that were aligned in that case, or, you know, where we didn’t have to, where we had this telepathy, it’s not telepathy, it’s actual brain to brain, uh, uh, interfaces at some degree, to some degree, right?  

Gary    00:29:48    Yeah. I mean, in this essay in the second half, we sort of explore the idea of, you know, if you just had this kind of as an implant, you know, when you’re, uh, uh, an infant or something, you grow up with it. Um, both, both of us can imagine that, you know, we’re really good at taking in new channels of information and just, uh, uh, making them work. Um, and so there’s lots of work even, you know, with adults on sensory substitution and just people are very flexible in learning to use these new channels of information. But I think it would be a learning process, just like learning language is this, uh, protracted learning process where over time you sort of align with other people and you learn how to use it. And so, I mean, this is pure speculation, I think with, with this sort of, uh, brain to brain interface, it would be sort of like that, right?  

Gary    00:30:36    Just like we learned to, you know, uh, use, um, the expression on someone’s face together with what you’re saying to kind of build a larger, more multimodal meaning. So you can imagine adding to that some channel of, you know, flow of neural information that would help you kind of build rapport and maybe disambiguate certain things, add, uh, details to, to certain things. But the question of, you know, so let’s say, right, you want to beam a thought to me about your experience after watching some kind of movie, right? Your experience of that movie is based on all sorts of other memories and other movies and, you know, your, your likes and dislikes and all of this stuff. And so for that to make sense to me, like, would you in essence, have to right beam all of your memories, right? Because that movie experience is, is contextualized within everything else. And so an isolated, uh, mental state, that’s why we’re like, it wouldn’t really have much meaning outside of the context of, of everything else in your head. Um, and so with language, we can decontextualize it to some extent, but, you know, that’s why it’s hard to communicate about things like feelings because Yeah. They, yeah. They’re, they’re much more embedded. Um, yeah. Yeah.  

Paul    00:31:59    So, but, but if I’m, if I’m describing my experience of a film or something to you, in some sense, it’s nice to use language, which is this low dimensional, abstract thing. Yeah. Because instead of imparting my experience, what I’m doing is I’m allowing you to form your own experience based on my low dimensional description, right? So it’s, it’s so that you can move through your own world, right. And understand it in your own way. And, and, and that way it’s kind of special, kind of beautiful.  

Gary    00:32:26    Yeah. Yeah. Yeah. Yeah. Um, yeah. Absolutely. Uh, and so, um, Mark Dingemanse, uh, also from a few years back, has, has a nice, um, I think it's published as a chapter, uh, on, on telepathy, uh, taking a different angle, where a common treatment of telepathy treats it as this sort of, you know, a, a brain dump, right? Like, you know, one person transmits a thought or some series of thoughts to another person, right? Whereas actual communication is this interactive process, and we make meaning together. And there's also, uh, something that he's studied a lot, which is conversational repair. So, uh, it, it's not a smooth process. So we say, huh? And we, you know, raise our eyebrows, and we realize that what we are saying doesn't quite make sense given the flow of the conversation, and we backtrack, and that's an inherent part of kind of meaning making. Um, and, you know, a single-channel kind of, uh, direct communication, uh, wouldn't, wouldn't work that way. And so, you know, describing a movie, right? You could imagine it as a back and forth, right? And that makes it more of what you're saying. It's sort of, you, you're interfacing with the other person's, uh, background knowledge and experiences, and you tailor the message to them. Hmm. Yeah. Yeah.

Paul    00:33:51    Well, uh, we don’t have to perseverate on this, but, um, I keep coming back to this, you know, uh, thought about inner speech and language and, and the relation between language and cognition, and wasn’t it Albert Einstein, you know, we always have to talk about Einstein. He’s a classic example. Wasn’t it him that didn’t like speak until he was 10 or something like that. Like, his language development was super late. Am I wrong about that? I  

Gary    00:34:16    I’ve heard the same story. I don’t know about 10. I feel like it’s one of those, like, uh, you know, words for snow examples Sure. Where every iteration number goes up and up. Yeah. Uh, but yeah, I’ve heard that too, that he was like,  

Paul    00:34:26    He was 50. He was, he was 50 before he started speaking 50. Yeah, yeah, yeah. But, but, uh, but then I thought, well, he’s a good example of someone who I, I suppose, thought very spatially and, um, he’s the, uh, uh, the, uh, the go-to for an example of someone who’s brilliant, right? And yeah, he didn’t, he wasn’t fettered by language <laugh>.  

Gary    00:34:48    Yeah. I mean, natural language is good for certain things, right? And, um, you know, math is not a natural language, right? And so, uh, there is, well, so <laugh>, I think there's a reason why, um, math is, is difficult, uh, for us, for most of us mm-hmm. <affirmative> and requires formal instruction, right? We can, um, talk more about it later. But, uh, it's, it's very, I would say it's very unlike natural language. Um, there is a link between spatial imagery, which is different from kind of static visual imagery, and math. Uh, and you see this both in, um, actual studies, but also in just self-report of many mathematicians, uh, talking about the, the, the usefulness of, of spatial imagery, uh, which, naively, I think many people outside of math might think, oh, you know, what's so spatial about it? Like, okay, geometry, sure, but what's so spatial about algebra? But all of these have spatial analogs, spatial transformations, and it's very useful often to think about it spatially. Um, but, um, so I, I think one reason why we can be as good at math as, as at least some of us are, uh, some, some humans, uh, is that we, we can, uh, rely on our spatial cognition, which presumably evolved for other things, uh, and can be kind of hijacked, um, for, um, for math. But, um, but yes, it's, it's pretty different from language. Yeah.

Paul    00:36:30    Yeah. What, uh, total aside, what are, are you Russian? What is your, uh, background?  

Gary    00:36:36    My background? Yeah. Uh, yeah. Yeah. So I, uh, originally from, yeah, the then Soviet Union, uh, from Belarus. Okay.  

Paul    00:36:46    Uh, I’m tr trying to place that slight, slight accent. Slight  

Gary    00:36:49    Accent. Yeah. Yeah. I came when I was 9, 9 10. Um, yeah, whether people can detect it sort of depends on where they’re from. Then I lived in the, in New York, east Coast, so there’s probably a bit of that and some, some of the Russian, yeah.  

Paul    00:37:00    Oh, okay. Yeah. I think you say cognition like Paul Cisek, and I think he has some Slavic cognition. Oh, huh.

Gary    00:37:07    Which  

Paul    00:37:08    Is cognition. I always say cognition. Uh, so I don’t know, maybe I’ll start saying cognition.  

Gary    00:37:12    Cognition,  

Paul    00:37:13    Yeah. Funny. Um, okay. Yeah, these are the important hard-hitting questions. Um, you, uh, you trained under Jay McClelland. Yes. And, uh, he must be delighted about these large language models and their abilities and people's excitement about trying to look for somewhat symbolic and conceptual forms of representation in these models. I don't know, are you still in touch with him?

Gary    00:37:38    Occasionally? Yeah. I mean, when I reach out, he’s great at, uh, at responding. And, uh, if I’m in the area, uh, uh, I, I, I, I, I try to see him. My, uh, in-laws are in the Santa Cruz area, so I’m, uh, over there at least be before the pandemic was over there more, more regularly. And, and so we’ve, we’ve met up and, um, but yeah. Yeah, it’s, uh, uh, it’s, it’s, it, it’s fun to see, uh, to see all that’s, that’s been happening, compare that to my grad school experiences. Yeah.  

Paul    00:38:11    Oh, yeah. Yeah. Like, it’s almost like, well, yeah. Um, so you’re, you’re part of a large body of people who are playing with these large language models. And, uh, I don’t know if you’re trying to trick them or what your angle is, but what is, what is your view on, um, on large language models in general? And then I’m curious whether it, the success of these language models have altered your thinking at all about language and cognition?  

Gary    00:38:39    Uh, yes. Uh, well, first, my, my general sense is that of, uh, excitement, uh, and, uh, and I, I'm, I'm kind of, you know, giddy <laugh>, uh, are you, and, uh, well, just because it's, it's, yeah, it's so cool to see it. Uh, it's also tinged with some frustration, which probably other, uh, folks on the podcast have also voiced, right? That there is this, you know, predominantly engineering approach, uh, to them. And I think, yeah, right? That's, uh, it's, it's effective, you know, these sort of benchmarks, it's, it's been very effective at, um, at, well, improving performance. But, uh, it's come at the cost of, um, efforts into kind of using them to do science. And I, I think it's not because we can't, it's because, you know, so much effort has been, uh, focused on, you know, having the bolded line in, you know, the tables, uh, you know, beating the benchmark, beating the previous result.

Gary    00:39:45    Right? Right, right. Yeah. So, um, but yeah, I mean, when I was, uh, in, in grad school with Jay, uh, you know, we, we used neural nets to gain insight. The focus was to use them as, as scientific tools. Uh, and, you know, they, they were toy models. They couldn't do anything. And, um, the, the symbolic models at the time, uh, you know, we felt, I felt, didn't give much insight, but they could, they could do stuff, right? If you wanted a model of poker, right? There was no connectionist model of poker, or even, you know, a model of, of driving, not, not really. Like, if you wanted, you know, a program to drive a car, you had to, uh, you know, do it symbolically, right? And so, uh, it's now kind of flipped, right? Where the, the models that work really, really well, in many contexts at least, are, uh, not the symbolic ones. So that's, that's been an interesting reversal.

Paul    00:40:40    So there are still people calling for neuro-symbolic AI. I mean, do you think that that is, uh, a past concern now and that we can just solve everything with scale and neural nets?

Gary    00:40:52    I think, uh, probably not scale. Um, but I think it depends on what question one's trying to answer, right? So, for example, speaking a little bit, uh, for, for Jay, he, for a long time now, he's been very interested in math and how people do math. Uh, yeah. And so math, at, at a certain level, is obviously symbolic. But his question, and it resonates also with the way that I approach this, although I haven't been studying math at all, is, okay, we have this neural network of a brain, and somehow we're able to do something like, you know, symbolic computation, uh, algebra, right? I would say that we're not very good at it, but of course, like, compared to other animals, like, we, we are, right? And some of us, yeah, are really, really good at it, and we can invent machines that are even better at it, but, like, we did the inventing.

Gary    00:41:49    And so we absolutely want to understand how this emerges from a, um, from a neural network. And so understanding the emergence of symbolic, or symbol-like, behavior is, is, I think, a really important goal. But, like, what's unsatisfying is if you kind of get to start with all the symbols, if you just build it in, right? And so, yeah, you can add a calculator to ChatGPT and have it detect anytime you're asking a question that involves arithmetic, just plug in the calculator, and it would work much better than trying to learn, uh, you know, math from language. But like, that wouldn't be that interesting scientifically. So I think if you're gonna focus on things where our, our cognition is most symbol-like, that's great as a research question, but I think, you know, you wanna try to understand how that emerges from, um, from neural networks, but not necessarily from just ingesting data, right?
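
As a side note, the "just plug in a calculator" idea is easy to sketch. The routing heuristic and the stubbed language model below are hypothetical, and this is not how ChatGPT's actual tool use is implemented; it only illustrates bolting an exact symbolic tool onto a learned system.

```python
# Toy sketch of routing arithmetic to an exact tool instead of having the
# neural net "learn math from language." Hypothetical routing logic only.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """Safely evaluate a simple arithmetic expression like '37 * 48 + 5'."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question, language_model=lambda q: "<model's free-form answer>"):
    # If the question contains a bare arithmetic expression, hand it to the
    # calculator; otherwise fall back to the (stubbed) language model.
    span = re.search(r"[\d\.\s\+\-\*/\(\)]{3,}", question)
    if span and re.search(r"\d\s*[\+\-\*/]\s*\d", span.group()):
        return calc(span.group().strip())
    return language_model(question)

print(answer("What is 37 * 48 + 5?"))           # exact answer from the tool: 1781
print(answer("Why is arithmetic hard for us?")) # handled by the model stub
```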

Gary    00:42:50    Like, uh, to a large extent, I’d say that the, the most symbolic parts of our thinking are those, uh, that are formally taught. And this is a controversial statement. Many people don’t agree with this. I, I, I, I would defend that statement that, you know, things like formal logic, uh, yes, we can do it. Mm-hmm. <affirmative>, we’re not that good at it. It’s stuff that we are taught how to do. And part of that teaching, I think, involves kind of mapping between, you know, the things that we are good at and trying to use those for, for doing these kind of, I would say, less natural operations, right? So, you know, it’s striking, right? A, a a a really simple electronic circuit can do arithmetic much better than our, you know, huge brain, but in yeah, learning how to do arithmetic, right? We learn little algorithms and tricks, uh, and also of course, writing things down on paper to, uh, overcome the limitations of our working memory and so on and, and, and be able to do that despite that. But most of us wouldn’t discover that on our own. Uh, so <laugh> Yeah.  

Paul    00:44:00    Yeah. That's one of the most useful parts of language, I suppose. Are, are you an extended cognition advocate, à la Andy Clark?

Gary    00:44:09    Uh, I mean, depends. I, I find, yes. Uh, I think in the, in <laugh>, for the most part, yeah, I, I mean, it depends on, well, what, what you mean by, by that. But I think, uh, it’s absolutely the case, yeah. That, you know, we, we, we have incorporated all kinds of tools, uh, language being, being one of them, uh, into just our, um, our kind of typical environment. Yeah. Um,  

Paul    00:44:39    He used the word, um, oh, sorry.  

Gary    00:44:41    Mm-hmm. <affirmative>, go, go, go ahead.  

Paul    00:44:43    I was, I was gonna say, um, you used the word, um, emergence a few times when talking about symbols. What do you see as the relationship between a symbol and the sub-symbolic entities that, uh, give rise to that emergent property? I mean, is a symbol an emergent property of sub-symbolic processes? In a sense, everything is an emergent property, but is that how you think of, uh, a symbol, or do you think of it as an abstract, concrete entity?

Gary    00:45:12    I, I think of it as, as an emergent sort of chunk. So, uh, it’s been really interesting. So you asked earlier about whether these large language models have kind of changed how I think of that, uh, anything in language. So one, one specific thing, and then I’ll circle back to the symbol point is, um, sure. Seeing just how much perceptual information seems to be embedded in language so that, of course, these language models, um, are, they’re, their only exposure to the world is through language, uh, the pure language models. And yet they come to, uh, have all this knowledge about what things look like, um, you know, spaces, right? Yeah. Navigating through spaces. And it’s, it’s crazy. It’s, it’s not hard to find, uh, gaps, but that they should know anything at all, uh, is remarkable, right? And it’s one thing if it’s just repeating something that’s heard, right?  

Gary    00:46:10    But that's not, I think, what's happening. And there is a very strong analogy here to, um, you know, for example, um, Marina Bedny's work on how much, uh, blind people, congenitally blind people, know about the visual world. Um, we have somewhat different takes on sort of the, the role of language, uh, in this process, you know, where, so it's clear, I think to both of us, it's coming from language, but she puts more emphasis on the sort of inferential processes, uh, than the kind of statistical learning from statistical co-occurrences. And so, putting aside the question of whether humans learn all this perceptual stuff from language itself, what the models make clear is that the information is out there, uh, that it can be, in principle, uh, learned from, from language. And so it's, uh, and I think in many cases we do in fact learn a lot of it from language. For blind people,

Gary    00:47:06    Learning about the visual world through language is, is one example. But of course, so much of the visual world that we, you know, we can experience, but we don’t, and yet we can talk about it. We, we have, I would say, real knowledge about, uh, things that we’ve never personally seen and, and we, we learn, um, that through language. And so it kind of, I think, uh, changed my expectation of just how much of this type of information is embedded in language. Um, so, so yeah. So, so that’s kind of changed. I, I think, uh, it’s changed my view, uh, on embodiment a bit, uh, that, you know, I, okay, I, I think prior to this, I, I would’ve thought of it as being more central than it is. And there is the question of course of, okay, you know, obviously the language that these models are trained on comes from humans, uh, who are embodied.  

Gary    00:48:05    And yeah, as a result, arguably infuse their language with all of this information about, uh, appearances, about, um, you know, moving through the world about space. Uh, and had they not been embodied, like, okay, of course, it, it, the information in language would be different, but that’s also true of people learning language, right? So people are learning language from others. And so, uh, you know, it’ll be different if we found that, you know, models producing language and learning from language produced by other models, right? Uh, converge on the same thing, but, uh, the, the, the, the point that, that, oh, it’s because humans are producing language and that’s why the models can succeed, is, is to me, a, a kind of a sterile point because it’s humans are learning language and are learning from language that is produced by other humans as well. So it’s mm-hmm. <affirmative>, I don’t know if that, if that, uh, totally makes sense. But it, uh, so, um, the idea that, oh, we personally right, must be embodied for this. And that bit of language to make sense, I think is, uh, in some ways challenged by, by the success of large language models. Um,  

Paul    00:49:27    Well, okay, so you just used the, the term make sense, which, you know, relates to the idea of meaning. And yeah, I mean, is one way to think of language, like, <laugh>, that it's its own level of abstraction and connection, and you can learn all the structure in the world, but are these language models missing meaning? And is embodiment and grounding important for meaning? Of course, well, you know, because the, the language model, perhaps it's only great at, um, the statistical regularities, right, of language, and that, that's its own structure, but then the connection to meaning is, is through embodiment or grounding.

Gary    00:50:09    Yeah. Yeah. So, so this is, you know, obviously contentious issue, and it depends on what one means by meaning. Um, so of course, it, it always struck me as strange, uh, to think that, okay, if you have no grounding, uh, to the external world, you know, you can’t have any meaning at all. But then, okay, uh, how much grounding do you need? Like if you put in a little bit <laugh>, now suddenly everything is grounded out, and then meaning just magically appears. That doesn’t seem right. Um, and, uh, you know, then I also find myself reflecting on how people actually use language and how much of language is pretty abstract and not about concrete things Yeah. In the world. And so many of the words we use, right? The, the way that we tend to use them, they might have certain more literal meanings, but the way that we tend to use them really are about the more abstract, uh, aspects that are about the words relationship to other words.  

Gary    00:51:15    Uh, in fact, so Marc Brysbaert and his colleagues, who collected the large-scale data on concreteness judgments that many people, uh, use, so tens of thousands of words, you know, rate this word on a concrete-to-abstract scale. Uh, here's a, a little, I think an insufficiently widely known fact about those ratings, uh, which is that the way that they defined whether a word is concrete or abstract is that concrete words are things that, um, words denoting things that you could point to or enact, right? So jump is fairly concrete in that, you know, if you, if you couldn't use the word jump, you could just, like, jump around, uh, to convey the idea of jumping. Um, so that's the one end of the scale. And then abstract, they defined as words the meanings of which you would, uh, need to describe with other words, right?

Gary    00:52:13    Uh, and so the reason this is important is that when you actually look at the distribution of words that people use, it, it skews pretty heavily to the abstract end. So these are words, oh, is that right? That, yeah, that, um, you know, and obviously in child-directed speech it's, it's a bit different, it's more concrete. Um, but, uh, yeah, we, we did a few of these little exercises where you, you know, you take a passage of text and you remove the most concrete words from it, and compare that to removing the most, uh, abstract words from it. And removing the most concrete words removes some details; removing the most abstract words, even just focusing on content words, not, like, 'the', uh, right, you just have no idea what this is about anymore. Um, so really, so much of meaning is in these abstract words, which, you know, those meanings are the relations between words. Um, so yeah, I, uh, I personally have no trouble in saying that there is a huge amount of real meaning just in those connections. Um, and, uh, yeah, you know, once you, yeah, yeah. Uh, it
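
A minimal sketch of the word-removal exercise Gary describes, using a tiny made-up stand-in for the Brysbaert et al. concreteness norms (the real norms rate tens of thousands of English words on a 1-to-5 scale); the passage and ratings here are illustrative only.

```python
# Hedged sketch: strip the most concrete vs. the most abstract content words
# from a passage and see which version is still interpretable.
# The ratings below are made up; real concreteness norms use a 1-5 scale.
concreteness = {
    "idea": 1.6, "meaning": 1.7, "process": 2.0, "relation": 1.8,
    "dog": 4.9, "table": 4.9, "jumps": 4.5, "book": 4.9,
}

passage = "the dog jumps on the table while the idea of meaning shapes the process"

def strip_words(text, keep_if):
    # Keep unrated words (mostly function words) and any rated word passing the test.
    return " ".join(w for w in text.split()
                    if w not in concreteness or keep_if(concreteness[w]))

print(strip_words(passage, keep_if=lambda c: c < 3.0))   # concrete words removed
print(strip_words(passage, keep_if=lambda c: c >= 3.0))  # abstract words removed
```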

Paul    00:53:25    Depends on what you mean by meaning. I suppose <laugh>, it all comes back to that <laugh>.  

Gary    00:53:30    Yeah. Yeah. And I think, you know, the way that, um, for example, language is often studied, let’s say, word learning in kids. I think reference is overemphasized in part because like, it just makes it easier to run the experiments, right? So you study how kids learn, you know, in a lab, uh, a word for some novel object, right? So you have some concrete object, and it’s, it’s not, because most researchers, I think, uh, language development researchers think that that’s all there is, it’s just that it’s more tractable, um, you know, than to study the more abstract, uh, parts of, of, of word meaning, uh, and how, you know, words become associated with one another. Um, but I think that that among, you know, other things led to an overemphasis on, you know, concrete reference, right? As a key part of meaning.  

Paul    00:54:27    Do, do you think that language models have anything to say about the way that language develops in children? Uh, because they, they learn so differently. And then, so I was talking, you know, I, I had Ellie Pavlick on recently, and, um, her conclusion from some of her work is that it's, it's possible that these large language models could kind of learn backwards from humans, right? Where they get the grounding later, I don't know how much grounding, yeah, they, they would need, like you're alluding to, but basically, like, learn perfect language, whatever that is, and then get the grounding later. But, but do you see any way to tie language model, quote unquote, learning to the way that humans learn?

Gary    00:55:08    Yeah. So that actually makes a lot of sense to me. That, that there, there is a lot of work, uh, showing that kids learn, yeah, that there is a big concreteness advantage, right? That, uh, children's early words are more, they tend to be nouns, they tend to be concrete nouns. When they start learning verbs, they tend to be the more concrete verbs. So, so there is lots of evidence pointing to the importance of grounding. I don't know if it's really about grounding or about, uh, kind of what kids use language for, right? Which is

Paul    00:55:41    For forgetting what they want,  

Gary    00:55:42    <laugh> Getting what they want. Exactly. And so, you know, they don't need to talk about this sort of stuff, right? Mm-hmm. <affirmative>, uh, that we're talking about now, right? It's, it's for, um, getting what they want, getting attention, right? Getting, you know, and, uh, <laugh>. So, um, yeah. Yeah. And, and so in a sense, I agree on this with Ellie, it's kind of a backwards process. Um, yeah, it would be interesting to know whether, um, in these language models, I feel like there's probably work on this, um, there is also a concreteness advantage. So even though there's no, like, even if there's no reference to the real world, right? More concrete words tend to be used in, uh, more predictable contexts, right? They, they have narrower meanings, and so they're easier to predict. And so if you are just learning through prediction, there's, there's reason to think that that would be easier, um, to learn.

Gary    00:56:44    Um, but, uh, as far as parallels, so yeah, I, I mean, it's, it's kind of become a trope that, like, oh, kids don't learn language in anything like the way that models learn. I mean, um, I think when it comes to language use, that's, that's true. But, um, I think kids are predictive learners. They also, uh, you know, are learning from examples and generalizing from examples, and a lot of kids' early language is, uh, very stereotyped, and it's not as creative and productive as is often made out. Uh, and when you know the child's environment, like, you know, when they say something new, I mean, when they start going to school and they're learning from other kids and stuff, it, it's all <laugh>, you, you lose track of it. But, like, how old are your kids? Yours are kinda early?

Gary    00:57:34    I know you have... sorry. Yeah, yeah. Uh, we have two boys. One is turning three in a couple of months; one is five and a half, so in kindergarten. Yeah. Wow. Okay. So it's been fun to watch. But I was gonna say that earlier on, when they're saying fewer different things, you can often kind of track where they got something from, right? It's like, oh yeah, that phrase, that's clearly from this TV show, and they're only using this word within this phrase. And then a month later it branches out, and now it's being used in a more expanded context. But it's not like they learn this word and from the beginning have some full knowledge of how the word is used; at first it's really brittle and narrow.

Gary    00:58:26    And then often, you know, if you ask them, right? Like, often they use words without full understanding of what they're saying <laugh>. So do I, that's a thing. Yeah. Yeah, exactly. So do we, but we probably have a higher threshold for it. Like, we have some social monitoring, right? We don't wanna be called out or anything. Kids don't have that as much. Yeah. Right. Um, so, but I wanted to just briefly return to the emerging symbols. I think what's been so interesting for me in watching these language models is just the effectiveness of prediction, right? And I wouldn't wanna say that that's all there is to human learning, but I think that's a lot of what human learning is about. And I think it means something to find that, hey, this process of predicting works really, really well.

Gary    00:59:23    Right? Same thing with reinforcement learning, right? Does that mean that nature is trying to tell us something? If it works really well across a bunch of contexts, could it be that it works really well because it's a really good way of solving a bunch of problems, and biological evolution is likely to have found a similar solution? And so when it comes to language learning, of course, predicting the next word or the surrounding words is generally not the point. But it turns out that it's a really good way of learning structure. And so if you start being able to predict things really well, it probably means you've learned some useful internal model, not just of language, but to some extent of the world. And if it turns out that symbols are a useful part of that model, well, you are learning symbols. And so, yeah, this idea that, oh, it's just autocomplete, therefore it can't have meaning, I feel like it sort of misses the point. The goal is not predicting; that's just the loss function. Trying to predict is just a really good way of learning underlying structure. And if you can predict really, really well in lots of other contexts, we call that some degree of understanding, right? Even if it's hard to talk about.
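To make the "prediction is just the loss function" point concrete, here is a minimal, generic next-token prediction training step in PyTorch. It is a toy sketch, not any particular model from the conversation: a tiny GRU language model and random token ids stand in for a real architecture and a real corpus, but the objective, cross-entropy on the next token at every position, has the same shape.

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

class TinyLM(nn.Module):
    # A toy stand-in for a language model: embed tokens, run a GRU,
    # and predict a distribution over the next token at every position.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        states, _ = self.rnn(self.embed(tokens))
        return self.out(states)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, 32))   # random ids standing in for real text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()      # the loss is "just prediction"; whatever structure helps
optimizer.step()     # prediction is what the model is pushed to learn internally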

Paul    01:01:06    Yeah. So you don't think that <laugh>, sorry, this could be an off-the-wall question, but you don't think life is necessary for, um, understanding, meaning, et cetera?

Gary    01:01:21    I don't know. It's, yeah, it's kind of <laugh>, is that above my pay grade? I don't know, overly philosophical.

Paul    01:01:29    Come down on one side or the other, man.

Gary    01:01:31    <laugh> Yeah. So, you know, so far these models are not actual actors or agents, right? They respond to our prompts; they're not actually creating anything on their own, and people are. So I don't know. I mean, I find myself often landing on the side of, I think humans understand far less than we give ourselves credit for, right? There's way less understanding in us than we assume. And so we're probably overestimating our own understanding and underestimating understanding in these sorts of models. And so, you know, I think the models are learning sort of world models, in some of the same ways that we are learning world models. But we have lots of goals, and we are kind of agents in the world, and we effect change in a way that these models don't. We sort of make stuff, we make meaning, in a way that these models don't, right? They're just responding to our prompts. And if you get two models talking to one another, it's not very deep <laugh>, it becomes kind of circular. Of course, lots of our human-to-human conversations sure aren't very deep either. So, yeah.

Paul    01:03:21    <laugh> Speaking of circular, this is something that I also asked Ellie the other day. So you have these language models. They're generating a bunch of text. They're trained on corpora of web text, text from the worldwide web. But then the text they generate is gonna go onto the web, and future models are gonna be trained on the text that they generate. What effect does that have on us as a society, but also on the way that language changes and evolves over time?

Gary    01:03:52    Yeah. Yeah, I've thought about it. I don't have an answer, right? It seems like it would just create this circularity and probably amplify existing biases. I think it's fundamentally different from, you know, a chess or Go model playing itself, because that's a closed world with a definite reward function, and so you can learn from exploring this world and reach new insights. If you're just using language without any connection to anything, I don't see how you can go beyond just reinforcing existing biases. But if you now combine these models with other sources of data, maybe you can break out of that circularity. And one can flip that question around and ask, okay, well, how is it that humans are not running into that same problem? Because we are learning language from other people, and then we are producing the training set. And presumably it's because we're also interacting with the larger world, having different experiences and sharing those experiences with one another, inventing new technologies, interacting with those technologies and disseminating that new stuff to other people. And so we're not just rehashing the same thing over and over and over again.

Paul    01:05:42    Yeah. Like, if I go, I don't know, take up surfing or something, I'm gonna learn new vocabulary, but I also might create a new word for when the water is choppy and it's cold and I had a bad run or something, you know, I might call that a blu or something. And so with a language model, I suppose you could build that generative neologism objective into the model, but then, man, that'd be scary. It seems like they would run wild, and our language would change really quickly. Yeah.

Gary    01:06:14    <laugh> Yeah. Yeah, that's a really interesting idea. So I've been very intrigued by lexical innovation and language change. One way to think about it is, oh, you know, we invent words for things that we already know, and I don't think that's right. That could be true for the inventor: if someone invents a word, it presumably reflects some idea that they've already had and that they find useful to have a word for. But in general, the words that we encounter are not things that we necessarily already had a concept for. We learn the concept, many concepts, I think, in the course of learning those words, right? And so if that word happens to become part of our core vocabulary that every speaker has to know, then you have to learn those words just in the process of learning the language.

Gary    01:07:12    And so you as a speech community end up aligning on those concepts. And so, yeah, I could imagine these models, through being exposed to way more text than any individual human, could kind of build a new chunk and say, okay, well, here's some useful regularity that is frequent but doesn't have a word for it, is not lexicalized, and here I'm gonna invent this word and then use it in context. And many words we learn just passively, by reading and encountering them in context, right? And so now people are exposed to this thing. So that could be kind of cool. I hadn't thought of that before. That's an interesting kind of influence on, yeah, on lexicalization and language change.

Paul    01:08:05    What is a concept and how do you think about it relative to a symbol?  

Gary    01:08:11    Um,  

Paul    01:08:13    So I know a lot of your work, you know, focuses on what you were just alluding to. I mean, you've done research on this, the back and forth between concepts and language, and the idea that you espouse is that we don't have these inherent, innate concepts and then just map our words onto them; it's more of an interaction.

Gary    01:08:37    Yeah. Yeah. So I think of concepts, and I should caveat this by recognizing that there are different meanings of concept as used by psychology and philosophy, and I'm talking about concepts as internal mental entities. So I think of them as just mental representations of categories, right? And again, with symbol, different people use symbol differently. One distinction between how at least some people use symbol and how I use concept is that a concept isn't a singular kind of undifferentiated node, right? So it's perfectly sensible for me to say, okay, you have a concept of a dog, but then some dogs are typical dogs and other dogs are kind of atypical dogs, and some dogs look more similar to cats and other dogs look very different from cats, right?

Gary    01:09:38    So it doesn't require some single node that is the dog, right? Mm-hmm <affirmative>. DOG, you know, in all caps. But there is a sense in which we can have a thought about dogs that is somewhat distinct from thoughts about a specific dog, right? And that's something that language is really good at instantiating. So even if a speaker has a specific dog in mind, they're thinking of a specific dog, they use the word dog, and it activates a representation in the hearer of a dog that is more abstracted and more categorical than what would be activated by, for example, seeing a specific dog. And so I think of language as being sort of like a cue or an operator, right? For instantiating a mental state. In some cases maybe we'd be able to instantiate that mental state even without language, but I think often we wouldn't; even if we could in practice, we would not. And in the course of learning a word for something, and then also in using the word regularly, we're learning to instantiate that more categorical mental state, one that is not as focused on a specific experience, a specific exemplar, a specific situation.

Paul    01:11:03    Sorry, I'm kinda jumping around here, but I have a couple more open-ended, broad questions, and then in the last few minutes I have some questions from Patreon listeners that I'll ask you. How was I gonna phrase this? Has the advent of these large language models changed the way you view language in terms of how special it is? You know, within the hierarchy of awesome human cognitive abilities, is language still up there? Or, you know, for me, seeing these language models, it's like, oh, okay, it kind of suggests that language is not that great.

Gary    01:11:42    <laugh> Oh, yeah. So I actually have the opposite kind of takeaway, which is that it kind of vindicates the idea that there is something really, well, central <laugh>, I think, to language. So for example, it's been really cool to see the generative art models, right? Mm-hmm <affirmative>. That's awesome. I think they probably have a whole lot to teach us about the nature of visual information. But it's not a coincidence, I think, that they're using language as an interface. So you're training them on these captions and images. And one could say, okay, well, of course we want them to use language, because if we want people to use them, for them to be useful, it's helpful to be able to use language to prompt them to produce certain kinds of visual outputs, right?

Gary    01:12:47    But language is, I think, actually a much more central part of these models, right? Because if I want to generate, you know, a piranha riding on a unicycle, okay, and this is something we can now do <laugh>, well, where did the model get that concept of a piranha riding a unicycle, right? That doesn't come purely from the visual information. I'd argue that you can't get it from the visual information alone. It comes from training on vision and language. And so language is not just being used as an interface; language is actually what is, in these models, creating a lot of these categories that we can then deploy and use. It's incredible, actually, that, you know, the sense of a fish riding a unicycle and a horse riding a unicycle.

Gary    01:13:41    It's a different type of riding, right? Visually. And it's really cool that it often gets that right <laugh>, and that's, I think, kind of unexpected. But that verb, ride, that's not in the visual world, right? That's a word meaning. And the models learned it to some extent because it's there. And were it not for language, how do you get riding out of the visual data? I don't know. I don't think there is a way. So yeah, it's kind of showing that language is actually a much more central way of organizing information than one might think.
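Here is a minimal sketch of the "training on vision and language" setup being described, in the style of a CLIP-like contrastive objective: each image is pulled toward the embedding of its own caption and away from the other captions in the batch. The encoders and features below are toy stand-ins (random tensors and linear layers), not the actual models discussed; the point is only where language enters the training signal.

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, shared_dim = 8, 128

image_proj = nn.Linear(2048, shared_dim)   # stand-in for a vision backbone's projection head
text_proj = nn.Linear(512, shared_dim)     # stand-in for a caption encoder's projection head

image_features = torch.randn(batch_size, 2048)   # pretend pooled image features
caption_features = torch.randn(batch_size, 512)  # pretend pooled caption features

img = F.normalize(image_proj(image_features), dim=-1)
txt = F.normalize(text_proj(caption_features), dim=-1)

logits = img @ txt.T / 0.07          # similarity of every image to every caption
targets = torch.arange(batch_size)   # caption i is the description of image i
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2  # symmetric image-to-text and text-to-image loss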

Gary    01:14:41    And, I don't know, so for example, with the older, regular supervised vision classification models, trained on ImageNet or whatever, mm-hmm <affirmative>, you show an image and it tells you that it's a dog or whatever. In my discussions with folks working with these models, I got the sense that they weren't thinking of the language as playing any role. That these are not in any sense language models, right? They're just vision models. And of course, to be useful, you want it to output a verbal label, but that label had nothing to do with language. And I think that's not really right, because in that supervised learning process you're telling the model that, whatever, horses or dogs or whatever, are a thing. And that's a lot of what language is telling us also, right? Were it not for the encounter with these words, it might not occur to you to treat all examples of this thing as having anything in common. And so this guides your learning. That supervision signal is telling you, okay, treat all of these as being the same sort of thing. You still have to figure out what they all have in common, the model still has to do that learning, but at least there's a signal telling you which things should be treated as the same. And I think that's a lot of the role that language, natural language, is playing in human learning.
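The point about the label being a grouping signal can also be put in code. Below is a minimal, generic supervised classification step (a torchvision ResNet and random tensors standing in for a real labeled photo dataset): the integer label only says which images count as the same kind of thing, and the cross-entropy loss is what forces the network to discover what those images have in common.

import torch
import torch.nn as nn
from torchvision.models import resnet18

num_classes = 10                               # e.g. dog, horse, ...; the label set is the "language" part
model = resnet18(num_classes=num_classes)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(16, 3, 224, 224)          # stand-ins for real photos
labels = torch.randint(0, num_classes, (16,))  # the grouping signal: "treat these as the same"

logits = model(images)
loss = loss_fn(logits, labels)   # same-labeled images get pulled toward one output category
loss.backward()                  # the network still has to work out what they share
optimizer.step()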

Paul    01:16:00    But is there any detriment to that? Is there any, I'm sorry, I'm pushing you on this. Yeah, yeah. So, you know, I'm kind of playing devil's advocate, but by only looking at the commonalities and calling all dogs dogs, you know, or whatever, is there any detriment in terms of cognition?

Gary    01:16:19    Yeah, for sure, for sure. So I think you see that in the domain of social categories, right? We learn gender also, right? We learn these labels, and we tend to then automatically classify people as members of that category. And that has consequences for treating people, right? It doesn't mean you can't then also represent them as an individual, but there's a consequence to initially classifying them. So you see that in what's been called the other-race effect, right? That it's harder for people to recognize individuals of other races. Actually, it's more accurately called the minority-race effect, because the idea is that in the US, let's say, white faces are a majority group, and so whether you yourself are white or Black, you know that white faces are the majority group.

Gary    01:17:20    So if you see a white person, you're less likely to classify them as a white person, because that's just kind of the default category, and so the alternative is to represent them as an individual. If you see a member of a minority group, and the first classification is that they are a member of this group, it becomes harder to represent the individual details. And when the gender category also becomes salient, it doesn't mean that you're only representing that person as male or female, but it means that you're representing them as, say, a male who is, you know, whatever, and connecting that to other details. So there are absolutely consequences, and often negative consequences, of this sort of categorization.

Paul    01:18:07    Okay, so you're giddy about the large language models <laugh>, you're impressed. But I think I've heard you say that you're not giddy about the idea of artificial general intelligence. Well, do I have that right? And if so, why not? If you're so giddy about these large language models, aren't we just one step away now from AGI?

Gary    01:18:32    Well, I find the talk of AGI to be kind of uninformed, because it assumes a certain, I think, shallow view of what intelligence is, right? First of all, it's sort of like, okay, humans are generally intelligent, so when will these AI models be generally intelligent? And then there's this idea that, well, there's nothing special about the level of human intelligence, and so once they're on that track, you know, one step below human intelligence, then it's just a matter of days before they're at human intelligence and exceeding human intelligence. And I think that reasoning makes sense when applied to some closed domain, like playing a particular game, right? It's true that there's presumably nothing special about the human level of Go or chess playing, and once you're good enough, you can quickly exceed it.

Gary    01:19:26    But I think that logic doesn't apply to actual natural intelligence, right? Because, well, some people think they understand human intelligence. I think they're wrong and have been misled by a kind of reliance on intelligence testing and IQ tests, which, at their best, maybe can be good at measuring certain culturally valued skills and ways of thinking, but there's nothing general about it. Sure, IQ tests in many contexts are predictive of success in certain types of jobs and so on, because we value certain skills, and so we design tests to test those skills. But what is general? I don't think there's anything general about it. And so I feel like the discussions of AGI are often predicated on that kind of shallow view of human intelligence.

Gary    01:20:32    Hmm. And so I also like to think in terms of a thought experiment, right? Okay, if you think that there's a certain something about the human brain that gives us general intelligence, well, go back 50,000 years. If you observe people, they would not do well on an IQ test, right? And they have the same brains. Obviously the cultural technologies are different; they are way better at certain things that now we don't care about, way worse at other things. They're not doing math, they're not doing science, right? But the hardware is the same. So would you then conclude that they're not generally intelligent? The whole thing doesn't really make much sense.

Gary    01:21:31    <laugh> It doesn't make much sense to me. And then I think these large language models are showing that, yeah, you can learn, for example, to produce all these grammatical sentences from just input, so they have a lot to teach us about what is needed, and perhaps not needed, to, say, learn language. But so far they're not doing things in the world the way that animals, human and non-human animals, are. And it's not clear how good they are at doing things in the world. So not only are we not at the top of that curve, we're not even really on it yet <laugh>. Yeah.

Paul    01:22:20    But I like that you highlighted the distinction between humans 50,000 years ago, let's say, and today, because I agree with you. I don't buy into AGI, or generality in general. But what it does highlight for me, and what I've been enjoying thinking about recently, is the vast capacity for that range of cognition that our brains endow us with. I'll say our brains, you know; I guess that could be controversial for some people. But it's the capacity that is more impressive to me than the actual doing of any one given kind of task or range of tasks. But I don't know if that makes sense to you or if you agree with that.

Gary    01:23:05    Yeah, yeah. No, it is amazing. I wonder, kind of continuing the thought experiment, if you had to make a prediction based on the behaviors of people 50,000 years ago, would these animals, which are anatomically modern humans, go to the moon and be doing modern physics and all this stuff? Like, what reason would someone have to make that prediction confidently, right? And, you know, 50,000 years ago humans are on a very different trajectory from other animals, but in terms of the level of technology and actual scientific achievements, it was totally different from now. So I don't know. I find the discussions of intelligence within the context of artificial general intelligence to be very narrow and focused on not just the humans of the present, but particular kinds of humans of the present, and so valuing very specific types of intelligence and not others. And yeah,

Paul    01:24:26    This guy, this guy, yeah <laugh>. So I'll end, before I ask you a few specific Patreon questions, with something I began with Ellie Pavlick: I asked her, if I had to freeze her, how long she would like me to freeze her before thawing her out to wake up and continue her research career. And that kind of morphed into seven and a half years, because she said in five to ten years she feels like we'll have a good understanding of how language models work. Do you think that's too long of an estimate, or do you think that we'll understand how language models work in the near future, far future? How long would you like to be frozen?

Gary    01:25:13    I mean, understanding how language models work is not actually my primary goal. I think that time estimate seems plausible to me. But I'm not sure about understanding how they work; I think it'll be very satisfying, but I don't know that it would change much. So let's focus on, let's say

Paul    01:25:41    Let's say for your own career and goals. How long would you, yeah, like, if you had to be frozen and then wake up and continue asking the questions that you're asking and, uh, you know,

Gary    01:25:53    Yeah, yeah. Um, I'd be curious, okay, I'll say a hundred years. A hundred years.

Paul    01:26:00    Oh, I like that. All right.

Gary    01:26:02    Yeah, <laugh>. Yeah. So I think that'll give me the opportunity to see how language has changed in response to all of this technology. I'm really curious, actually, about the use of these language models. Some uses strike me as time saving but uninteresting. Like, okay, you can write emails faster, okay, eh, mm-hmm <affirmative>. Other uses are much more interesting. So for example, one of the remarkable aspects of these models that I didn't see coming is that, okay, you've been trained on this enormous, enormous corpus of data, but you can access very specific things in this precise and, I don't know, uncanny way, right? So, the use of these models for indexing information. And so I could imagine people using them more for basically feeding in their individual information.

Gary    01:27:06    So people have been playing around with this, feeding their journal entries to these language models, their notes, right? And then having the model use that information when the person queries it. So it's not just indexing knowledge at large, it's indexing your own personal knowledge. And you can imagine interfacing this with people's external memories, so photos, recordings, and having that be a much more integral part of our workflow. And that could really transform things. Like, I'm thinking of education, what it means even now, right? With all the technology, students, college students, are taking notes, and if they're typing notes, okay, it's easier to search through those notes, but you have to come up with your own ways of organizing them. But that could all presumably, even with current technology, be automated, and indexed probably in a more effective way than what we do as individuals. And so having all of this available to us and kind of integrated, I could see that feeding back in and making us, I don't know, smarter in certain ways. So that could be cool, to see how that develops over the next century. So a hundred years,
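As a sketch of the note-indexing idea, here is one common way to do it today, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model: embed each personal note once, then embed a query and retrieve the most similar note to hand to a language model as context. The notes and the query below are invented examples.

from sentence_transformers import SentenceTransformer, util

# Invented personal notes standing in for journal entries, class notes, etc.
notes = [
    "Moved lab meeting to Tuesdays starting next month.",
    "Idea: test whether hearing a label speeds up category learning in a VR task.",
    "The kids invented a word, 'glimber', for the flickering porch light.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
note_vectors = model.encode(notes, convert_to_tensor=True)   # index the notes once

query = "when did I write down the VR category learning idea?"
query_vector = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vector, note_vectors)[0]   # similarity of the query to each note
best = int(scores.argmax())
print(notes[best])   # the retrieved note could then be fed to a language model as context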

Paul    01:28:26    I would kind of be hesitant or scared. In a hundred years, things could be so dramatically different that I would worry that I wouldn't be able to function in society <laugh>. But I like it.

Gary    01:28:38    Yeah. Yeah. I think, honestly, if you look at how people get the future wrong <laugh>, in predicting the future, I think there is a systematic bias, right? People's predictions of technology, some of them are pretty good, but people tend to really underestimate social change. So you have, like, Victorian-era predictions of, okay, flying things, and we have flying things, okay. But then everyone is still dressed the same way, and there are traditional gender roles, that has not changed, but now the mailman flies around, right? <laugh>, like

Paul    01:29:23    <laugh>.  

Gary    01:29:24    So I think the hardest things to adjust to will probably be the social things.

Paul    01:29:32    Gary, thank you so much for dancing around so many of these topics with me. Before I let you go, you have a kind of psychedelic, I can tell it's a brain, on your shirt. It looks like a cool shirt. Can I see it? What is the shirt? What is it?

Gary    01:29:45    What is the shirt? It's just a brain. Um, it's a figure-one brain map. So there used to be this site called Woot. It was bought up by Amazon at some point, and they kind of disassembled it, but I have a whole lot of t-shirts from it. They sold t-shirts; people submitted designs, everyone voted, the top three designs got printed, and then they sold the t-shirts. So this was someone's design. But they haven't been around for a long time, and all of these shirts I think will soon have to be retired, cuz, yeah, they're not being replaced <laugh>. Oh, it's, I,

Paul    01:30:32    It's a, it's a cool shirt.

Gary    01:30:33    Yeah. Different ones. Yeah. Thanks. Yeah, <laugh>.

Paul    01:30:37    All right. Well, thanks again for your time. Appreciate it, and good luck moving forward.

Gary    01:30:42    Yeah, yeah. This was fun. Thanks.  

Paul    01:31:00    I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you wanna learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.