Brain Inspired
BI 163 Ellie Pavlick: The Mind of a Language Model

Support the show to get full episodes and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models, for example, probing them to see whether something symbolic-like might be implemented in the models, even though they are of the deep learning neural network variety, which isn't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding – that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.

0:00 – Intro
2:34 – Will LLMs make us dumb?
9:01 – Evolution of language
17:10 – Changing views on language
22:39 – Semantics, grounding, meaning
37:40 – LLMs, humans, and prediction
41:19 – How to evaluate LLMs
51:08 – Structure, semantics, and symbols in models
1:00:08 – Dimensionality
1:02:08 – Limitations of LLMs
1:07:47 – What do linguists think?
1:14:23 – What is language for?

Transcript

Ellie    00:00:03    Are these human-like? People just have opinions. People are like, definitely yes, right? And people are like, definitely no. And basically it's an unanswerable question. Anyone who's a scientist should admit that that's an unanswerable question right now, and it's just a matter of opinion. This question of whether language models trained only on text can learn meaning presupposes a definition of meaning. And there are many definitions of meaning on offer, none of which is the official one. Under some of them, definitely yes. Under some of them, definitely no. And under others of them, it depends, right? There are very, very deep questions about language and cognition, right? Like, how are concepts structured? Then we have these giant language models that appear to do a good thing with language. And there's a very deep question about whether there's a connection that is relevant.

Paul    00:00:59    This is Brain Inspired. I'm Paul. My guest today is Ellie Pavlick. Ellie runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In artificial intelligence, of course, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, what kinds of representations are being generated in them to produce the language that they produce. So we discuss how she's going about studying these models, for example, probing them to see whether something symbolic-like might be implemented in the models, even though they are of the deep learning neural network variety, which isn't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding, that is, whether a model that produces language well needs to connect with the real world to actually understand the text that it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more. If you value Brain Inspired, there are multiple ways to support this podcast. Go to braininspired.co to learn how; I would really appreciate your support. Show notes for this episode are at braininspired.co/podcast/163. All right, here's Ellie.

Paul    00:02:34    Was it Socrates or Aristotle? I always get them confused, but I think it was Socrates who worried that writing our language down, having a written language, would make us dumb. And that didn't happen. I'm pretty sure social media has made us dumber. Are large language models going to make us dumb?

Ellie    00:02:56    I was literally talking to a student about this this morning, who asked the same question. I don't know, maybe I'm a fundamentally optimistic person, but I really don't see that happening. I would more guess they would make us smarter, in a standing-on-the-shoulders-of-giants kind of way, right? If there are certain types of tasks that are routine enough that the large language model can do them, then it kind of frees up our time to focus on the harder things that can't be done. I picture, like, if you work in some kind of job where reporting is part of your day to day, and a ton of that time is spent just reading the various documents, not adding any insights of your own, just selecting different passages, getting the citations all in order, paraphrasing what was written, making it sound nice, formatting it, and things like that.

Ellie    00:03:49    And then only after you've done that do you get to do your critical analysis, make recommendations, weigh the pros and cons, talk to people and solicit feedback. Basically, the information that wasn't on the internet, that ChatGPT couldn't have done. If that only happens after you've done the reporting, great, you can have the report generated right away and you can jump to the next step, right? So it does seem like there's more of a potential that it means we're using human brains to do things that only human brains can do than it means that everyone will stop thinking. I don't know, but I'm sure there are people who have the exact opposite opinion of me.

Paul    00:04:28    <laugh>. Yeah. Well, I'm sure there's the complete spectrum, but in terms of, like, cogitating, right? So a lot of people say, and I think this is true, that when you have thoughts about something, if you try to write them down, you realize that they weren't clear thoughts. And the act of writing them down clarifies your own thought process and how you think about whatever you're writing. Absolutely. And so there's this generative sort of process that might be missing.

Ellie    00:04:54    Absolutely. Yeah. Writing is the only way I know how to think through a problem. I think my students get very annoyed, where it's like, we started a project and the next day I'm like, have you started writing the paper yet? <laugh> And they're like, I don't have any results yet. It's like, doesn't matter. You should be writing the paper on day one, because that's how you know what it's all about. But right, I don't know. I mean, I don't see why ChatGPT means that humans are no longer allowed to use writing for that goal, right? So let's grant that people still care about the work that they're doing and are trying to do good work and are trying to affect the world in various ways, right? Mm-hmm. <affirmative>. There are a lot of people who are lazy and are not trying to do that.

Ellie    00:05:39    And, but <laugh> those people aren’t the ones who are really worried about them becoming dumb anyway. Like they’re already probably cutting corners in various ways, right? So let’s assume that people are actually in some pa aspect of life, people are trying to do the right thing. There’s things you can use chat G P T for that, it’ll automate it. And then there’s things where you’re gonna be like, no, the, the thing I was trying to do, I need to think this through myself. And so maybe what that would be is like you get chat G P T to generate a whole ton of stuff and reports for you, but you’re still gonna have to think through what you’re actually trying to do with that. Like, I, and at that point, you might have to do some writing or some drafting or some thinking on your own.  

Ellie    00:06:16    I, I don’t know. I guess I’m, I heard somebody use the analogy of like a calculator, like thinking of chat, chat, G B T, like a calculator for languagey tests. I kind of like that analogy. So when I’m thinking of an picturing a lot of like, people using it as a tool for like report summarization, search types of things mm-hmm. <affirmative>. Um, but under the assumption that they still actually want to contribute something to the world, right? So if maybe if you’re in a dead end job where all you’re being asked to do is generate the port report and no one asks you for your own opinion on the matter, like, that’s gonna be automated, but those are unfulfilling jobs anyway. And those people just wanna get that job done and go home and then go do the thing they are passionate about, whatever that is. Right? Um, yeah. But for the things that we’re passionate about, I don’t imagine us being like, well, Chad GBT is doing it just as well, so I’m not even gonna bother. Right? Like, we definitely do have things we can add on top of it.  

Paul    00:07:06    I mean, I, I know that you, you have, uh, switched or I don’t know if you’ve always just given oral exams instead of written exams, but mm-hmm. <affirmative>, um, there, there’s this worry that every, all the students are just gonna be using these prompt based language models to generate their mm-hmm. <affirmative> their stuff. But, but you don’t, uh, you, you require oral exams. Yeah,  

Ellie    00:07:24    I do have oral exams. I, um, and I’ve been, I think it’s funny. So I’ve had those for the past couple years and I’ve talked to my colleagues and something, it’s basically, I find it really hard to tell from someone’s written work whether they understand what they’re saying.  

Paul    00:07:40    I find it hard to tell from my own thoughts, even, you know,  

Ellie    00:07:43    <laugh>, right? I think it’s very much this, like you said, the kind of like I write to figure out to think through a problem. I try to write it and if I find that I’m failing to write it, it’s because I don’t actually know what I think. Right? It’s like kind of like mm-hmm. <affirmative>, the failures of writing and the failures of communication are actually failures of understanding. Like to me, if I can’t articulate it, it’s cuz I don’t actually know. It’s not that I know exactly what I wanna say, I’m just not sure what words to use, right? Like it’s almost a, so yes. So like spending a lot of time doing that, I can then see all of those intermediate pieces where I’m like, oh, I can predict picture having generated this text and it would probably mean I don’t really understand what I’m saying.  

Ellie    00:08:20    Um, but I find it really hard to then grade students based on this. Like, I, I don’t know, I just, I feel very uncomfortable giving student a bad grade, um, or a less than good grade for writing that I can’t tell if they’re just not a very good writer or they don’t understand what’s going on. Whereas if I can talk to them or even just hear them say it, uh, in spoken language, somehow that de clouds it a little bit. So I switched to that before chat g p t, just cuz I find writing hard to evaluate the fact that we find chat g p t also hard to evaluate just makes that point more clear. Um, so yeah, I think oral exams are great. Um, so yeah, I think more of that would probably be good for, for students and stuff.  

Paul    00:09:02    So I’m gonna keep a really, uh, broad picture here for a few moments before mm-hmm. <affirmative> and we can dive into any of your vast array of, uh, work like analyzing these large language models. But, um, first I wanna ask you, and I know this is not your area of expertise, but I had the thought today that, you know, with, with um, language models generating text, uh, all this text is gonna go on the internet and language models are trained on the internet. So there’s gonna be this cycle of them being trained on the text that they generate. And I’m curious, what if, if you have any speculative or otherwise ideas about what that might do to our language evolution, right? Because it’s not, yeah, there’s this whole cultural step the way that language evolves and I know that that’s kind of a debated thing as, as well, but this could really muck it up perhaps.  

Ellie    00:09:50    Yeah, definitely. I think that will be fascinating to watch. It’s kind of hard to anticipate, right? So, so yeah, that is definitely the case. I’ve heard people, you know, talk about the risk of this. Um, like even really simple things like auto complete have already probably had an effect on like language diversity, right? I’ve, I’ve used the example before where it’s like I get an, so I will often not use auto complete cuz I care a lot about being like, I wanted to say thank you with an exclamation point and it’s offering me thank you with a period thanks with an exclamation point or like something else. And I’m like, none, none of those are what I wanted to say. Um, but, uh, but I think a lot of people would be like, oh, that’s not exactly what I was gonna say, but sure. I’m gonna take the, um, the one, the fewest clicks, right?  

Ellie    00:10:34    Um, and so then you end up with this like just collapsing of ba very basic diversity in the language that could then kind of get fed back into the system and include less, even less diversity or yes. For like linguist analyzing it after the fact. You might see this kind of convergence. Um, and then yes, the language on the internet being out there and then retraining the models on like, you could end up in this kind of odd subspace of language that never, you would’ve never ever reached from having just humans generating it. Mm-hmm. <affirmative>. Um, so I think that’ll be interesting to watch. I don’t know, I, I don’t like to be too like mellow dramatically fatalistic about or something like, you know, technology and cultural exchange and all kinds of things always change our language and that’s kind of a beautiful thing about it.  

Ellie    00:11:21    It’s like you can see how it would adapt. Like, I mean, I could imagine there being some kind of hipster culture the way like vinyl and like home baking is, is in vogue right now, right? Like you can imagine people are like, I only write with like, uh, pen and paper and send letters and I’m gonna use language from the pre AI days or something, right? Like I can imagine that kind of thing coming up and then we would get some other interesting language that would arise. Um, yeah, I can imagine like the development of particular linguistic markers that people use to signal how much they are not the ai, right? Like the way that people will like spell stuff out rather than use, like I’ll type as soon as possible rather than a s A P because a s a P has taken on a certain connotation that as soon as possible it doesn’t have, and you’re like, oh, I wanna mark this as not being part of that lingo. I could like, I could imagine all kinds of interesting language evolving out of it that, um, like not so much of a, so I wouldn’t think of it so much as like a loss as like an evolution in the way language is kind of always evolving with the technology and the times. And it could be hopefully a very interesting new thing to study.  

Paul    00:12:28    Yeah. I mean, it’s always a danger to put a normative, um, stamp on, right? Whether the direction of evolution is good or  

Ellie    00:12:35    Bad, I suppose, suppose right? Because I, I mean language is definitely going to change in the next 50, a hundred years no matter what, right? So it’s just, it’s gonna change in different ways depending on the developments, but like no matter what it’s going to change. Like, we’re going to see evolution in language cuz that’s what language does, right?  

Paul    00:12:55    If, if you were, if I forced you to freeze yourself for X amount of years and then wake up thaw out and wake up and, and continue your research career at minimum of one year, uh, how many maximum of, let’s say a hundred. Cuz that seems like a long time from now. I don’t know. You, you can just specify it. I just don’t want you to say tomorrow  

Ellie    00:13:15    <laugh>. I mean, I kind of would say, so this is a very exciting time in this field, right? Like, I would not want to miss what’s gonna happen in the next couple years. Um, I think a lot of very exciting stuff will happen in the next couple years. Um, so,  

Paul    00:13:33    But this is kind of a question that gets out, but if I have to, how far you think, sorry, how far you think along we are in understanding in, in your satisfaction in understanding language models and whether we have the right tools and et cetera?  

Ellie    00:13:46    Yeah, so like I just completely pulled out of my ass number that I’ve been kind of using is, I feel like more than five, less than 10 years will be a window where we will have, and, and that, and some people like that seems weirdly like soon. Um, but like where we’ll have like a deep understanding of actually how the models work. Like I think that right, right now we don’t know what’s happening under the hood really in any, um, uh, real sense. And so that makes it hard to use the insights we get from them to actually inform what we think about language, inform what we think about humans answer questions. Mm-hmm. Like, are these human-like, like people just have opinions. People are like definitely yes, right? And people are like, definitely no, and basically it’s an unanswerable question. Like anyone who’s a scientist should admit that that’s an unanswerable question right now.  

Ellie    00:14:34    And like you’re, it’s just a matter of opinion and it’s unsubstantiated opinion. Like we just can’t say cuz we just don’t know really anything about the mechanisms. We don’t know about the representations. We don’t really like, we’re looking at individual model artifacts. Um, so I think right now we just know very little, but I think a lot of people are working on it. The field is advancing very significantly and I think I, I just see a lot of work from very good people trying to get just a much more low level theoretical mathematical understanding of what’s happening under the hood. And that’s pretty interdisciplinary work where there’s people in cognitive science and neuroscience also trying to look at that. So I’m hopeful that like we don’t have, we’re not gonna have to wait forever to, to have like, we’re not gonna maybe understand them in a hundred percent perfection, but I think we’ll be able to understand them at a much more rapid pace than we understand the human brain, for example.  

Ellie    00:15:23    Like I think they’re probably simpler systems than that. This is the kind of thing where I’m probably going on recording stuff that is completely wrong, right? Where it’s like, you know, I said they were simpler than the human brain. It turns out they’re like crazy. But that would be crazy. Yeah. But basically I think when the five to 10, uh, year timeframe we’ll be able to say some, something precise and meaningful about how they work that will then lend insights or at least answer some of these questions that are really like everyone’s trying to ask about, like, can they, like people keep trying to speculate on tasks they absolutely can’t do or absolutely will do. And I just think it’s utter speculation, but hopefully in five to 10 years it won’t be utter speculation. We can say with some, some level of rigor, like no, this is within the scope of the types of problems they can solve. This is within the scope of the crime. Like, I, I think we’ll have that within the decade, but I don’t think we’ll have it before at least five years. Um, but I, I don’t know where I got this numbers, it’s just, there’s that feeling, you know,  

Paul    00:16:17    <laugh>, so I Yeah, right. It’s an opinion. Yeah. So I would, I would need to,  

Ellie    00:16:21    So I guess I would, yeah. Yeah. So I guess I would thaw me out at five to 10 years. But that’s the thing, it’s like I don’t wanna miss the process of getting there. That sounds like the fun part. So I would be very sad. So I guess on that I would continue, I have to get frozen now cuz I would rather like go for five or 10 years, get frozen then, and then wake up in a hundred. That would be my preference.  

Paul    00:16:41    Oh, okay. I like that answer. I’ll accept that answer. Okay. Okay, great. Good. Besides if, if you got frozen, um, you would miss out on your, uh, child’s early days, early years, you have a one child as well.  

Ellie    00:16:52    Now we’re just getting deeply depressing. Like I don’t wanna think about I’m frozen and missing out on my child’s child. Yeah, she’s, she’s 16 months old right now.  

Paul    00:17:02    Oh, okay. I thought she might have been a little bit older. I was gonna, um, I was, I’ll freeze your child too. Why not? Um, I was going to,  

Ellie    00:17:09    Yeah, yeah, yeah.  

Paul    00:17:11    I was gonna ask. So, so this is kind of a big question, I suppose, or maybe a thousand questions in one. But, um, you know, having the experience, you know, of a 16 month old, uh ha has that changed your views on language? Um, and like it’s sort of, well, so here’s where the multiple facets of the question comes in because you were just alluding to how we don’t understand humans and we don’t understand language models. So like there’s, it’s really hard, you know, everyone just has an opinion. But has the advent of these language models and their press impressive capabilities, has it changed your own views on the nature of language, the, the, uh, where language sits in the hierarchy of our cognitive functions, you know, how special it is, et cetera?  

Ellie    00:17:57    Hmm. I don’t to give an annoying answer, like, no, it hasn’t changed like <laugh>, it’s like, uh,  

Paul    00:18:06    So annoying  

Ellie    00:18:07    Seeing the, seeing the, the massive pace of science in my field and also becoming a, a mother. No, these have not affected me at all. Like, I am still the same person <laugh>. But no, I think <laugh> like, no, I think in many ways, like I think knowing a lot, so I just said we know very little about how the, the systems work under the hood. Um, but, but from what we do know, right? Like kind of knowing how they’re trained, understanding that they’re learning these different patterns and learnings, associations and being able to replicate them, like that kind of stuff. Um, I think maybe I’ve always been kind of an on the, you know, I mean I’m an LP person coming up in the machine learning computational distributional semantics. So, so kind of maybe never thought language was that special in this regard. And seeing the success of it and knowing kind of where it came from, I’m not like, oh my God, I never thought, like, I guess I never was of the opinion like, we will never get here.  

Ellie    00:19:12    Maybe some other people were like, never saw this coming, but maybe I’ve just been like kind of in the optimistic, uh, group for a while that seeing it was like, okay, maybe I didn’t think it would happen this soon or something. But I don’t know, maybe I kind of, I, I don’t know. So from that perspective, I’m like, this type of behavior coming out of a system I don’t think makes me fundamentally question the nature of language the way maybe I would if I’d been brought up in a different tradition that was like more of a, you know, maybe a chompsky tradition or something that’s like, no, these models will never be able to do this. Um, so there’s that. I think, uh, like I’m very, very excited to see the models in their success, but I don’t think I was like, I never saw this coming or floored in that respect.  

Ellie    00:19:56    Um, and so that doesn’t, and then also just knowing a bit about their trained, I mean, I’m very much am interested in whether what’s happening under the hood could be similar to what we think about humans or could inform what we think about humans, but the way they’re trained is nothing like how humans are learning language. Yeah. And so from that respect, it’s like, I just don’t think of them as human like in that way, right? So it’s not like when I see my daughter learning language, I’m like, I, I just think of them as entirely different systems in that, uh, in some ways that it’s like the analogy isn’t even that, um, isn’t even as salient as I maybe I always thought it would be before I had kids. Like people are like, oh, it’ll be so cool for you as a language researcher, um, to see, but like when I see her learning language and stuff, I, I rarely am actually thinking in terms of analogies to these systems.  

Ellie    00:20:46    Um, there certain types of things. So like for a long time I had, um, this isn’t even really language, but I had like, there was this funny example in deep mind of like teaching this little thing how to walk with reinforce deep reinforcement learning. And it like walks like this, but there was an example of it failing and it was like laying on its back and like, like kind of writhing around to like scoot along the floor and people are like, this is a failure. Like it doesn’t have the right inductive biases. It doesn’t know blah, blah blah. But I definitely, I failed to get a video of it and I’m so upset. But before my, when my daughter was learning to call, she definitely did this. She like arrived on her back across the floor and I was like, okay, so this is not like a, an example of an inductive bias failure.  

Ellie    00:21:26    Like kids also do these ridiculous partial solutions of stuff that we think is weird in the learning process. So there’s certain types of things like analogies like that or like, um, these mismapping of words where she like over, um, so we were playing a, she likes to point to her belly button, um, and we were playing her name’s Cora, like, that’s cos belly button. This is mom’s belly button. Now she seems to think that for a while then if you said core or you said mom, she thought it meant point to your belly button, right? And this is like such a like neural net type of like overgeneralization thing to do where it’s like, oh, I saw these things in the same context and I can’t differentiate them. And um, so there’s certain types of these like errors that she makes them like, oh, that seems kind of like a neural netty error. Um, but then there are of course things that she succeeds on very quickly that neural nets would never do, like that she like learns very quickly. I don’t, uh, so I don’t know. So there’s some kind of analogies. I’ve now kind of been rambling on this answer a lot, but I don’t think there’s like a super, yeah, I don’t know. It’s kind of a piecemeal answer sometimes I think of analogy.  

Paul    00:22:30    Yeah. Well, okay, first of all, we almost named my my daughter Cora, but it became Nora instead. Oh, secondly <laugh>. Yeah. Uh, pointing at your belly button right, is an action. And I don’t know if we just need to jump right into this, but a lot of, some of some of your, um, avenues of research deal with, I know that you’re interested in, um, grounding, um, language like in the world and how we learn as humans, but so is this a fair, um, statement to make based on, uh, your thinking? Am I interpreting your thinking correct, that you think it might be possible for a, um, text only trained large language model that’s completely ungrounded to sort of learn that and get the grounding later? In other words, kind of learn backwards relative to, uh, humans mm-hmm.  

Ellie    00:23:21    <affirmative>.  

Ellie    00:23:23    Yeah. I, I think it’s possible. So I think a couple years ago I’ve been very interested in the grounded research. When I was starting, like at Brown when I first joined Brown, one of the things that I was, um, that was one of the things I was really excited to do because I worked a lot on semantics and stuff in, um, during my PhD. And it, that felt like the missing green was grounding. So this, I would say this is something where I have changed a lot or my opinion has switched a lot in the last couple of years because I was, and I wouldn’t, I wouldn’t say it switched. I still kind of consider myself like I’m unsure. But like five years ago I was like, no, definitely a grounding, embodied interaction is the missing ingredient. Um, we’re going to need that if we want the model to learn like meaning. Um,  

Paul    00:24:05    So you just, for, for people who aren’t watching the video, you used air quotes when you said meaning this is what is meaning. Do we, what is uh, do, are language models understanding what they’re doing well? What is understanding, uh, what is semantics? Why are, you know, right. Are we asking the right question? So anyway, sorry to interrupt.  

Ellie    00:24:22    Yes. Yeah, yeah, yeah. And these, like, these are the questions that like I spent all of my time when I’m not answering emails thinking about, cause these are like the really interesting questions, right? And this is what my students are all thinking about. I, and the reason I think it’s really, really hard is just the, every time you ask these questions, it has like some, uh, assumptions that some other field actually has an answer for you, like philosophy or neuroscience or something or linguistics that you can just go and get the answer and then we can ask the question purely by thinking about how the models work. But actually it’s like, it’s actively being worked on in all these fields. So these things are all coming along together. And so this, this question of whether the language models trained only on text can learn, meaning, um, presuppose is a definition of meaning And there are many definitions of meaning on offer, none of which is the official one.  

Ellie    00:25:12    Under some of them definitely yes. Under some of them definitely no and un under others of them. Like, it depends, right? So I mean, I think there are like many arguments you can make for why the language model maybe would be indistinguishable from a thing that was embodied. Um, some of these arguments are like, well, most of the concepts that we know, um, you never interacted with directly, right? Like someone just tells you about a thing and now you know about that thing. Like you don’t have to have like been to like Greece to have the concept of Greece and um, you don’t even really have to have seen pictures of Greece to have the concept of Greece. Like it’s fine. Um, and actually most of our concepts we probably get through this kind of chain of being told stuff. Um, there’s other arguments that are like, well, you know, maybe, uh, we can’t really, uh, tell like people, like all we can really observe is the way people associate concepts with other concepts.  

Ellie    00:26:10    And so you can’t really directly observe people’s perceptual experience. Maybe that’s not actually that important for the meaning. It could be like a part of it, but it’s not a key one. So there’s like arguments to those types of effects, which if you buy them for humans, then you could apply them to language models and say that they have the same thing. Of course, there’s other arguments that are like, no, the grounding to the world is the most important thing. Um, like that direct experience with it. And if you have that, and this is what you’re referring to, we we’re thinking just empirically, perhaps it’s the case that you could learn a structure that’s basically the same structure as what you, you would’ve had in the grounded case just by reading text and then post hoc map it on. And that’s something that we’ve been kind of exploring it empirically and trying to suss out whether that’s the case.  

Ellie    00:26:53    Cuz that’s really an empirical question more than a philosophical one, right? Um, so there’s like a lot of avenues to go and it, I think the question of whether what the model has content meaning really depends on what your perf what your theory of meaning is. And there’s a lot of them and it, they’ve been extensively explored in philosophy. A lot of people have like intuitive feelings about what they think the right theory of meaning is. And if you go and start studying a philosophy, you’ll realize that like none of those two intuitions quite, uh, aligns perfectly with any theory that stood up to enough rigor. And so it, it’s like not an easy question to answer.  

Paul    00:27:28    Yeah. Well, could you summarize, and I know that, um, we haven’t used the term affordances yet, but which is another, uh, debatable term whether an affordance exists and what it actually means and stuff, right? But can you summarize sort of what, what your, you know, uh, maybe high level kind of what you found with your initial tests of, um, this, you know, language, pre-trained language model grounding, first grounding after, uh, approach?  

Ellie    00:27:55    Yeah. So we did some kind of work on, um, on like, like basically having an agent, like observe objects moving around in a 3D world and trying to predict like, so that the way the model is trained is just predict where this object is gonna be at the next frame, given the trajectory up until now, right? Um, that was actually meant to kind of be the physical world version of the language modeling task. So in language modeling, you’re just like given each a sequence of words, predict the next word. And when we do that, given a sequence of words, predict the next word task, you get really nice abstract linguistic representations. Like we see this kind of representations of syntax and various semantic structures. And so that, um, is very exciting to see in language. So we’re like, well maybe if we apply a similar learning, um, objective over like the physical world, we would get the representation of these kinds of physical concepts.  

Ellie    00:28:46    And one of the concepts that we thought we might get is something like affordances, right? So like you would see something like the model should learn to differentiate, like, um, uh, we also did kind of basic verbs. And actually, like you said, there’s debate about whether affordances exist. I think this kind of like verbs versus affordances, like, like some studies just consider all of the verbs to basically be affordances. Others have a very small set of fundamental affordances. It was really hard to tell. So, um, but what we ended up looking for is like, kind of did the model learn these representations of like basic, uh, actions. Does it differentiate between like a rolling event and a falling event and a sliding event and a, um, those types of things. Um, and this was, I interesting cuz this was without any language there, right? So this was so is without like this is, yeah.  

Ellie    00:29:34    So this was like, do you, we were trying to answer, like starting to get at the question of like, maybe some of these concepts might like emerge or like people might develop these concepts just by interacting in the physical world and then later when they learn language, they’re just mapping language to them. Um, that’s like very different from, you know, the kind of environment where you might actually learn to differentiate the concept largely because there’s a word there that you’re trying to attach the event, right? Mm-hmm. <affirmative> mm-hmm. <affirmative>. So yeah. So we did find some good evidence that a lot of these concepts did emerge even without the language. Um, and that they, um, but not perfectly in a totally separate study. And that’s different, I can’t go into details, but we did play around with like vision and language models, some which have language and some which don’t.  

Ellie    00:30:19    And we did find like certain concepts, um, so in like the, the vision only models, the concepts of like bunny and squirrel were just not differentiated, right? It was like this was a model that was basically just trained to predict missing visual patches in a visual scene. And it doesn’t really differentiate between like small, furry woodland creatures because like, that’s actually maybe not that useful if you’re just trying to like, uh, have some notion of like visual predictability, <laugh>. Um, whereas once you introduce language, then it very neatly separates those things, right? Right. Um, yeah. And I would imagine that if we were to apply similar stuff to this like verb and affordance learning, it would be similar. There would be some things that are very salient and like useful concepts to form for the sake of like predicting the state of the physical world, but that unless you need to assign words to them, they’re not like sufficiently different to really, uh, to really, uh, distinguish. So maybe something like a, a push versus a nudge or something like those might just be part of the same category and it’s a matter of degree versus once we assign words to them, then you democratize it and you have to just kind of actually separate pushing and nudging or something in a more, um, more concretely  

Paul    00:31:33    Similar to like how I can only distinguish like three different types of trees. My dad could named like every tree and I, a tree is a tree to me. Yeah. And there are only three types apparently. Exactly. Um, yes,  

Ellie    00:31:44    <laugh>, I’m surprised you even have three types. I think I just, well, no, maybe I have two types. There’s like Christmas trees and other trees. <laugh>, maybe palm trees. Okay. I have three types, <laugh>.  

Paul    00:31:54    Nice job. So, um, so where are we then in, in terms of, um, I mean I know this is an unfair question, but, you know, just semantics, affordances, groundedness, the relation between all these things. What, how, how does the field view these are, are they, are they important? I know it depends on who you ask, but um, yeah, it just in broad strokes,  

Ellie    00:32:19    Yeah. This is where I want, this is where my kind of five to 10 year thing is kind of getting out. So like to me there are, there are very, very deep questions about language and cognition, right? Like how are concepts structured such that they support the kinds of things we can do, do with language. And then we have these giant language models that appear to do a good thing with language. And there’s a very deep question about like, whether there’s a connection that is relevant, right? Like, does the language model tell us something about that first thing? Like, does the success of chat g p t tell us something or generate new hypotheses, proposed possible explanations or theories about what might be happening in human language, what might be happening in their brain, the mind. And I just don’t think we can say at all whether it does or doesn’t right now, because we just don’t know how it works.  

Ellie    00:33:10    And so if we can, I think over the next five or 10 years, there’s some kind of basic science of just what is happening inside the neural network. And I think after we do that, we might be able to say something like, okay, we, so one option is we figured out what’s happening and it’s, it’s all a giant scam, right? Like the model has actually just memorized like, turns out everything chat G p T generated was exactly printed on the internet somewhere and it just recalled it, right? Mm-hmm. <affirmative> very unlikely that that’s the case, but that’s kind of the flavor of what a lot of people like, that’s an extreme version of what some kind of super skeptics feel like is happening with the neural networks, is they’re really just memorizing a large fraction of things and doing some minor tweaks or something that just deeply, deeply does not resemble compositional creative cognitive processes. Right? But,  

Paul    00:34:00    But some people would argue that we’re also doing that. Like, or Hasson’s, um, I don’t know if you’re familiar with his direct fit to nature mm-hmm. <affirmative>, I think it’s called. Um, you know, just like that our brains are essentially these, um, same sorts of things than just memorizing and  

Ellie    00:34:14    <laugh>. Yeah. So that actually, so that’s an interesting thing. So I think something that like, um, so like, uh, so definitely what we won’t find in like, well, some people might think we might find something like this in chat too, but it seems unlikely. It’s like a list of all the possible responses and when you put in a prompt it searches through something like a database finds that and regurgitates it. If that’s what we found, I think people would be pretty, well probably surprise actually. But, um, pretty unlikely to say, let’s go see if this is what humans are doing. Let’s see if this is where human speech comes from because okay, that seems like a very ridiculous thing to propose that like the way that I’m generating this right now is I’m like going through my inventory of possible responses to this question that I’ve memorized verbatim, but what we’ll probably find is some mechanism that looks a, has a lot of memorization involved and a lot of generalization involved.  

Ellie    00:35:07    And what we hopefully will have is a fairly precise story of how those things are getting mixed and matched and combined, right? Because we’ll be able to say we don’t know what humans are doing, but chat G B T is doing it like this, right? It’s memorizing this type of thing. It’s generalizing in this type of way. It’s defined these kinds of categories under this setting. It, you know, calls from memory under this setting it, um, decides to extrapolate or something, right? And if we had that kind of a story, then we could actually go look for similar, like we would have a more clear sense of what we’re looking for in the brain, right? Like, it, it would be a concrete proposal about something that could be happening and maybe some subset of those things actually could be similar to what’s happening in the brain.  

Ellie    00:35:43    So I think that would be like this really exciting thing and like grounding would be one of those things, right? Like we could actually say, here’s a concept that we always thought required grounding. We have a model like, uh, I’m scare quoting again around grounding all of these. That’s fine words, that’s fine. I might as well just scare quote my entire everything. Just, just keep going. Um, but, uh, so we could take some concept that we kind of assume requires grounding. We can figure out what, um, and this is a lot of ifs and ifs and ifs, but I’m imagining that we’re at a place where we have a good understanding what’s happening inside the malls. We could see what chat G p t or whatever the successor model that we’re looking at does with that concept. And then we can go try to see if the, the representation, like it, it would generate some predictions about what we should and shouldn’t see in humans if they’re answering or if they’re representing it similarly.  

Ellie    00:36:36    Right? And that could actually help us pretty precisely answer some of these questions that right now are very philosophical. Like right now, the question of whether, how important is grounding to humans is largely philosophical with some empirical data, right? I think. Um, so I think there’s a lot of potential for like a really interesting connection between these fields, but it’s hard to do when we just know as little as we currently do about how the models work. But I think it will be pretty doable when we know about five to 10 years more than we currently do. Hopefully even sooner than, hopefully sooner than 10, right? Um, so yeah, so, so some of these big questions you said about semantics and grounding and affordances and stuff, like, I think we might be primed to actually have really concrete answers like actual proposed concrete scientific studies to try to move ourselves forward on those questions as a result of these computational models. But I don’t think we can do it right now because they’re too black boxy.  

Paul    00:37:33    I’m gonna ask you about the difference between analyzing the representations versus analyzing the outputs of the models in a moment. But, uh, I can’t help before, but before that ask, I’m sure you’re aware of, um, work like out of the lab of ev Fedco who’s been on the podcast mm-hmm. <affirmative> and Ri Hasson, um, looking at the, the, uh, the representations of the models and correlating that to human based, uh, be human behavior mm-hmm. <affirmative>. And there’s this high correlation with predicting. So a lot of these large language models are, um, predicting the next word essentially is how they’re mm-hmm. <affirmative>, they’re, uh mm-hmm. <affirmative>, uh, objective. And it, it turns out if you, um, run the data through some bells and whistles, uh, there’s a, a correlation between humans doing the same thing. Um, and so, uh, how much do you think that next word prediction is, uh, is uh, the way that humans do it? <laugh>?  

Ellie    00:38:25    Yeah, I mean that’s a very, like, has to be a multifaceted answer, right? Like, I think there’s pretty good evidence that humans predict the next word, right? Like, we’re we have to good at anticipating we have to, but if you’re, what you’re asking is like, so like right now what I’m doing is not predicting the next word and then saying the word that I predicted to be the next word, um, based on probably the distributions over words. There’s, I, I mean this is like a, a, you know, we can get into like a subtle distinction about like what is the, uh, like how many latent variables are allowed to calculate into this predicting of the next word. Like if you say, well maybe I’m predicting the most likely next word conditioned on what I’m trying to say or something, then like Sure. Yeah. Um, but I think, yeah, I guess I don’t wanna make a hard stance of <laugh>.  

Ellie    00:39:21    It’s like, basically what does it mean to be predicting the next word? But I think most people, um, would argue that humans have a, a strong component that’s not a predictive one, right? There’s some goal directed something to what they’re doing with language, right? So it’s not, uh, so we, and then I, so one, one of the things I think needs to be analyzed in chat gb t is whether there’s similarly some other latent state that you could argue or I use chat G b t as like shorthand for like all such models, right? Like there’s a ton of these models, it’s like the name brand now, like, um, Kleenex or something, right? Like, I mean any of these models. But when we wanna look at these models, um, uh, you know, we we’re trying to figure out what they’re doing. Like what is the organization of the latent states that are, um, contributing to the predicted next word.  

Ellie    00:40:15    Cuz that’s where the real depth of trying to decide whether it’s analogous to humans, uh, is, is useful, right? Like, I don’t know if this quite answered your question, but I think like, right, like I think there’s clear, like clearly humans do a lot of predicting, uh, there’s a strong predictive component. We’re good at predicting. I think the distribution of words has a huge effect on how we learn them and what we learn about them. Um, but it’s just simp overly simplistic to say there’s nothing else that’s generating what a person says than just trying to guess what would be a likely continuation. Right? Um, with no other constraints, just saying, just purely thinking about a likely continuation, right? Because we have goals and agencies and, and, uh, intense when we’re generating, um, of course you could recast those things as being part of the predictive task and then Sure. Right. So, so then it kind of becomes equivalent about what we mean by predictive. Um,  

Paul    00:41:19    Do you think that, um, there there isn’t a set, um, like a standard set of criteria by which we evaluate models? Does, does there need to be, I mean, I know that, you know, you, you could take your 14 strands of research, right? And all the tests that you do and include it and like a battery of tests, right? To understand, um, an analyze these models. Does there need to be a kind of, I don’t wanna say benchmark set, but like a standard set of, um, evaluation criteria?  

Ellie    00:41:52    So I actually, this is, I feel like if a couple years ago you’d asked, I’d be like, of course this is the main goal. And I think actually I’ve been pushing a very different, um, line that which is, so I think I’ve really liked, I found it very freeing in my own work and I’ve, when talked to other people is moving to treating it like a natural science, right? So we have these models, we don’t understand how they work. And so the game is not just make them better, but test hypotheses, you know, formulate good experimental design to try to figure out what is happening. And so in the same way that you don’t say like, here’s the benchmark to, I mean, it’s a fundamentally different thing, but like other fields, this notion of like a standardized set of benchmarks or something is not really the, um, uh, the way you think about studying designs.  

Ellie    00:42:45    Of course there’s kind of the phrase, everyone’s responsible for the same data, right? So once there’s a set of studies and data out there, if you propose some new theory, it has to be consistent with all the other data that’s out there. Mm-hmm. <affirmative> in some ways that kind of feels like some massive benchmark or like a benchmark for theories or something. Um, but I think right now maybe, uh, NLP has, is a comment of this strong engineering culture and has like a real interest in, um, in standardizing and, and benchmarking models. Um, and I think that there’s a time and place for when that is exactly the right thing to do. And I think in the past, like we n l p advanced a lot because of this culture, but I think right now is not like, I think it’s not the right time to be trying to build the, the newest and latest and greatest benchmark.  

Ellie    00:43:34    I think that we just don’t know enough to know what that thing is. And a lot of resources could be wasted on things that just go stale very quickly or, um, worse, uh, we, we kind of, um, climb the wrong hill, right? So I think it’s, there’s nothing wrong with, again, going with my kind of five to 10 year timeframe, I think there’s nothing wrong with taking a couple years try to figure out what’s going on. Basically run some basic science tests on benchmarking and evaluation, try to figure out, um, like what are the, the dimensions of the models that we understand and don’t understand, and what are the things we’re trying to evaluate and can we try to come up with things that correlate with the outcomes that we care about or the things we, and then probably later, we’ll, we’ll want benchmarks again, because we’ll be in a race of just trying to build better models again.  

Ellie    00:44:19    But to me right now, like I see a lot of really smart people coming out with cleverly designed benchmarks that somehow don’t, they still don’t seem to really get at what we’re trying to get at. Cuz often what this takes the form of is let’s think of a really hard task. Let’s get humans to do it, right? Mm-hmm. <affirmative>. And so then the models, when they frus like to everyone’s frustration, crush it very quickly, we’re not really willing to accept that, therefore they’re human level. Um, and so what did the benchmark tell us? Cuz like we designed the benchmark presume, but then we say somehow success on this benchmark is not informative. Maybe failure would’ve been informative or something. I don’t know. I, it just feels like not the right time to be building more and more and more benchmarks. I think that taking a step back, thinking more critically, doing a lot of hypothesis testing, that’s probably the right move right now. It doesn’t mean it’s the right move forever for right now.  

Paul    00:45:10    Well, yeah, I didn’t mean, that’s why I didn’t wanna use the word benchmark, because that entails like improving model’s ability. Like if you pass, if you can pass this test with 90 x something percent accuracy, what I, what I was alluding to was the, the sort of hypothesis testing that you’re doing in, um, analyzing, you know, are the models, quote unquote understanding, do they have meaning or do they, do they have abstract symbols, concepts like in the representations mm-hmm. <affirmative> and kind of basing, um, comparing that to the way that we think humans do language because we don’t know how humans really do language, right? Right. So like there’s this kind of give and taken back and forth, but it’s, you know, I ask this because you do tests on these representations and it’s kinda like those tests that well, um, if the model is getting, if the representations in the model look a certain way, then we think it’s more human-like, um, or less human-like. Um, so mm-hmm. <affirmative>, I don’t, I’m not sure what I’m, what I want to ask you out, you know, I wanna ask you 15 different questions based on this topic, but um, like, are humans the right benchmark? Does it even matter if they do it like humans do? And do you think that they will  

Ellie    00:46:20    <laugh>? Yeah, I think it depends what you’re trying to do and who you’re talking to, right? So I think there’s, um, yeah, so there’s a, I’ve probably 15 different answers for each of your 15 different questions, but, so I would say I’ll definitely, I’ll start with the kind of the looking at the looking at representations. Cuz this is something that I just think is, um, just kind of all important right now. Like, just for our own understanding that looking at the behaviors of the models, um, we end up in this position of having like, uh, there’s multiple different things that could have been happening under the hood that could have resulted in the same performance, right? And so that from an understanding standpoint, if we, and there’s many reasons we might wanna understand the models. One is that we wanna understand them so we can trust them, right?  

Ellie    00:47:10    So that we can make them safer. So, um, so that we can explain their predictions to humans. One is we wanna understand them so we can make them better because if we understand what’s happening on the hood, maybe we can fix some problems, right? And another is we wanna understand them because we wanna use them as models of humans, um, and generate predictions about humans, right? Those are all, and I’m sure there’s many other reasons we wanna understand them honestly, just curiosity. Sure. It’s a really good reason to wanna understand them. Um, so I think that looking at behavior alone without looking at representations and mechanisms under the hood doesn’t help us understand them for any of those reasons, right? Um, but then for some of those reasons it matters if they’re human-like and for others it doesn’t, right? So if you’re trying to understand whether the success of large language models tells us anything about the neuroscience or cognitive science or linguistics that’s happening in humans, it matters if they’re human-like, right?  

Ellie    00:48:02    Um, so from that perspective it’s very interesting if you’re just trying to make them safe and trustworthy and understandable. It probably doesn’t matter if they’re human. Like, but you still need to know what’s happening under the hood. Um, so I think my lab is pretty interested in both. Like I am just deeply curious how the models work for the sake of knowing how they work. Cuz we’re com computational linguists and we work on building these models and it’s unsatisfying to see them working and not know how they’re working. But I’m also really interested in my students are really interested in understanding humans. So we often do a little bit of both, right? So, um, like theories about how humans work are a good source of hypotheses for how models work, right? So we’re saying like, we know humans are composition, well, we need hypotheses, right?  

Ellie    00:48:45    So otherwise you’re just pulling it outta your ass. Like, I don’t know where like you gotta start somewhere. Like, um, so if you say like, okay, models are doing a good job with language and one thing we’ve always said about human language is it’s compositional and it’s structured and hierarchical and it requires, uh, you know, variable binding and pre structure. It’s really natural to ask well as the model doing those things. Because if it is cool, then we know something more about how the model works and we actually can maybe say something about a possible mechanism that is implemented in neural hardware and could potentially be something we look for in humans. And if it’s not doing it, then we could either say, maybe we can revisit that assumption that language requires those things. Um, like if the model’s not doing it on the hood, it’s solving it some other way.  

Ellie    00:49:30    Or we can just accept that this isn't a good model of humans, depending on who you're talking to mm-hmm. <affirmative>. But it's still useful to know what it was doing under the hood for those other reasons, for the safety and interpretability and the practical stuff. So I don't think you have to decide where to look right now. I think what we know about how humans do language is a good place to look. That's not a random place to look <laugh>. I also imagine this happening in the other direction: once we know something about how models work, then we can use the models to generate hypotheses about humans, and then we can go look for those in humans. But, I don't know, to me, using humans as inspiration is interesting because we care about humans, and it's also a way of generating reasonable hypotheses about what might be happening under the hood.  

Ellie    00:50:11    I will also add that very often we've found that the models do have the stuff we're looking for. Yeah. So people are saying, why does it matter if it does or doesn't? This isn't meant as a prescriptive thing. I'm not saying, shame on you, model, you have failed for not having these kinds of compositional representations. But often they do have the compositional representations, and that's an exciting finding. For one, it makes me trust more that the models are gonna do a better job on future tasks. When I see an implementation under the hood that's consistent with the best theories we have of how language works, which involve these things, it makes me feel like it's not all tricks, right? Like something real is happening. And I had another reason why it's interesting when we find them... I don't know, I'm sure there's another reason <laugh>, but yes. Very often we do find evidence of these kinds of things, and so I think there's something there, right?  
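For a concrete sense of what "looking at representations" can mean in practice, here is a minimal sketch of the general probing-classifier idea: freeze a pretrained model, read out its hidden states, and check whether a simple linear model can recover a linguistic property (here, coarse part-of-speech tags) from them. The model name, toy sentences, and tag labels are illustrative assumptions, not the setup of any particular study discussed in this episode.

```python
# A minimal probing-classifier sketch: freeze a pretrained model, read out
# its hidden states, and fit a linear probe for a linguistic property.
# The model, sentences, and POS labels are toy assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Tiny toy dataset: sentences paired with word-level coarse POS tags.
sentences = [
    ("the dog chased the ball", ["DET", "NOUN", "VERB", "DET", "NOUN"]),
    ("a cat ate the food",      ["DET", "NOUN", "VERB", "DET", "NOUN"]),
]

features, labels = [], []
with torch.no_grad():
    for text, tags in sentences:
        enc = tokenizer(text, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]  # (num_tokens, hidden_dim)
        # Align word-level tags with wordpiece tokens; skip [CLS]/[SEP].
        word_ids = enc.word_ids(0)
        for i, wid in enumerate(word_ids):
            if wid is not None:
                features.append(hidden[i].numpy())
                labels.append(tags[wid])

# A linear probe: if this separates the classes, the property is at least
# linearly decodable from the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))
```

The usual caveat applies: a probe showing that information is decodable from the representations does not by itself show that the model uses that information.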

Paul    00:51:12    Well, a lot of your work has shown that yes, these large language models do have a lot of the same syntactic structure that you find in human language. So that's at a sort of abstract, not psychological, level, but, you know, where you can decode the structure, right? And it seems to have, mm-hmm. <affirmative>, the same given structure. But then mechanistically, the problem, quote unquote, of language constrains the solutions, right? And there's this issue of multiple realizability: your brain is different than my brain, we're gonna have slightly different neural firing patterns and so on. But could it be that, given that large language models are not brain-like, dare I say, they come up with a solution that is just a multiply realizable solution relative to how brains come up with that solution, even if the structures map onto each other at the structural mm-hmm. <affirmative> level, right? And then of course we get into what I think you think, which is that, based on your work, you're starting to believe more that the semantic levels are being mapped congruently between the systems. Yes.  

Ellie    00:52:35    Yeah. So I'll give you my current thinking on this issue, where I'm, like always, quite prepared and willing to be wrong and just revise, all right? So, whatever, right? But, um, good. I think the levels of analysis is very relevant here. And what we've mostly been talking about is, yeah, the semantic level, what sometimes gets called the cognitive level or something. So at the level where we're talking about compositions of symbols and things like that, which is kind of the traditional place that a lot of linguistics, formal semantics, is thinking about, right? That kind of level, and that's where we've been mostly thinking that these map onto each other.  

Ellie    00:53:21    And so I've been very excited to see evidence of things like the syntactic and compositional structure in neural networks as mapping onto humans. And we've been increasingly looking for more evidence of this. That said, I often feel when I talk about this, people think I'm being hardcore prescriptive, and I'm saying, hey, here's this kind of old-school symbolic story, and the neural net is gonna be doing exactly that, and in fact, if it isn't doing exactly that, then that's gonna be a source of its failures. I'm actually pretty excited. What I'm imagining is gonna happen is that in the neural net we'll find an implementation of this thing. It's not gonna be exactly this symbolic thing, it's gonna be the neural implementation, and around the edges there's gonna be differences. And I'm really hopeful that those edges and those differences are exactly where we'll generate new predictions about humans that hopefully we could go back and find in humans.  

Ellie    00:54:07    And this could actually be a better story of what's happening. This is like the dream world: it's a better story of what's happening in humans, because somehow the neural version of the symbol can explain phenomena that the purely symbolic version couldn't, right? So that's kind of it. I think a lot of our work is looking for this symbolic structure in neural networks. And it's not because I think that they should exactly implement this symbolic structure, but that they should neurally implement it, right? And, yeah, I'm optimistic that that neural implementation is better. So, you know, in some of the papers that I think you're referring to, we cited Fodor. Once you cite Fodor, people are like, oh, okay, so you're a Fodor person. I'm like, well, no, and I'm pretty sure Fodor would not be happy with that paper, right?  

Ellie    00:54:58    Like, he's definitely not gonna be okay with the neural nets trying to claim that they meet any of these criteria. But the point is, I do believe in a lot of what we get from the symbolic story, right? Like these abstractions and compositionality and filler-role independence and all of this kind of nice stuff that we get in language, I believe we need those things. But I also believe that the neural net can implement those things. And it's gonna be exactly those cases where the neural implementation is different from the traditional one that are gonna be the really exciting insights that might shed light onto the human side of things. But this is the thing where, over the next couple of years, we might keep looking for this and we might just completely fail to find these symbolic representations, like some of this abstraction. Like, I'm prepared to spend five years looking for evidence of variable binding in yada yada, and just not find it, and hopefully find a good story for why the neural net can't do it, if in fact it can't do it. But right now I'm more optimistic, cuz the things we've looked for, we've found.  

Paul    00:56:00    Let's say the neural net can do it, and in five to ten years you nail it down. So then how do I think about a symbol? Is a symbol an emergent property of subsymbolic processes? Is it just an abstraction that I use in language form to understand? Like, how do we think about symbols if they're carried out with these distributed processes?  

Ellie    00:56:27    I think in some ways the kind of flippant answer is that it doesn't matter, right? Because at that point we have a more precise story of what we're saying is happening. If we're using "symbol" as a way of defining a system, it's a concept we have created to explain certain types of phenomena, right? Now we have an alternative. We're saying, let's say everything works out, this is beautiful: we have this story in neural net world of here's what the basic inputs are, here's the representations it forms internally, here's the algorithms it runs over those representations, and here's how that produces the behavior. I leave it up to you whether you wanna call the thing in there a symbol or not. And some people will debate that till the end of days, but to me, that's no longer important.  

Ellie    00:57:18    Like, it's not always just semantics, but at some point it is just semantics. My hope is that we could have a nice enough model that that can be a conversation of just semantics. I don't really care; this is the new story of what's happening there. Somebody has referred me to... I guess there's a book out there that I've now been told to read a couple of times, that kind of talks about this conceptual thing in, I think, the history of mathematics: how, in order to get the proofs for particular theorems, it's like, oh, it turns out this thing we were trying to explain doesn't really exist, it should actually be defined in this other, different way. You can see this kind of stuff happening in other fields, right? Where it's like, if we get hung up internally on yes or no... I don't know. That's why it's a terrible thing for me to bring up right now, so I'm gonna just... Oh, cuz you can't figure out a random keyword search and try to find it. Yeah, yeah, yeah. Somebody has said that I should read this book, and I'm like, I should read that book. Yeah. Okay. But so I think this kind of thing comes up. Yeah, I'll follow up after, but that won't be helpful for your listeners.  

Paul    00:58:18    That’s fine. That’s fine.  

Ellie    00:58:19    Yeah. But yes, I think this kind of thing happens, right? You could imagine that at some point we do have some new model, and if we get really hung up on trying to say, well, there's this historic debate about symbols, who was right in that debate? Like, who cares anymore, right? They were having a debate at a different time. This is the time right now, this is the model that's under consideration now, call it whatever you want. The question is, is this a good model? Right?  

Paul    00:58:46    So, in your speculative guessing, you think that we don't need neuro-symbolic AI; you think the neural net approach will eventually encompass symbols as well.  

Ellie    00:59:02    That to me is the thing we're trying to look for, to see if it will happen. Yeah. I think there's something quite beautiful about that, so I really hope it's the case, right? And again, like I said, right now I don't have good reason to believe it can't happen, right? But yes, it would be like a neural implementation of symbols. I mean, you could still imagine that within the neural net there's some stuff that's more symbol-y and some stuff that's more traditional neural-netty, right? So stuff that might act more like a symbol would be something that binds to roles, for example, right? There are certain types of things where you could imagine defining the symbol by the operations it can engage in, and in that case you could have some parts of the neural net that implement things that can, you know, promiscuously associate with lots of other things, and other ones that don't, that act more idiosyncratically and have a given output for a given input. Like, you could imagine finding both types of representations all within the neural network, right?  

Ellie    01:00:03    Um, I think that’s a quite plausible outcome that we would see different types of representations emerging for different types of concepts and problems.  

Paul    01:00:13    This is a naive question, I'm sorry: are there manifold stories to be told with language models? Because they don't have the traditional recurrence where you look at low-dimensional manifolds in the dynamics of a network. They don't have that kind of recurrence, but is there manifold work being done? There has to be.  

Ellie    01:00:34    I can't imagine that's a naive question, right? Like, if the word manifold is in your question, it can't be too naive, right? That always sounds really fancy to me, like, oh, here comes the math. No, I mean, there are definitely people working on manifolds in neural networks generally. But yes, there's no recurrence in the current language models. So I don't know, honestly. I had one student who was starting to look at some manifold stuff, but that's a bit out of my domain. So this is the kind of thing where maybe we need to be learning more about it to try to understand what's going on. And I know that's a common tool in neuroscience.  

Paul    01:01:12    Just cuz language can be thought of as a low-dimensional cognitive mm-hmm. <affirmative> function. Sorry, I said that real slow. You know, like our thoughts are low-dimensional versions of what's actually going on in our brains, and maybe words are even lower-dimensional structures mm-hmm. <affirmative> in order to communicate. And soon I'm gonna ask you what language is for, but we'll hold off on that. So yeah, I just didn't know if that world had been explored. I haven't seen it in my little... yeah.  

Ellie    01:01:43    Yeah, there definitely is some, um, probably less than in vision. But there's definitely some, and I'm sure there will be more of that coming.  
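One very simple way to start asking dimensionality questions about a language model, well short of a full manifold analysis, is to collect hidden states and see how much of their variance a handful of principal components capture. This is a toy sketch under assumed choices (GPT-2, a few sentences, plain PCA), not a description of how such analyses are actually carried out in the neuroscience literature.

```python
# A toy dimensionality check on a language model's hidden states:
# gather token representations and run PCA on them.
# The model and sentences are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

texts = [
    "The scientists saw that none of the rats ate the food.",
    "Language models are trained to predict the next word.",
    "The dog chased the ball across the yard.",
]

states = []
with torch.no_grad():
    for text in texts:
        enc = tokenizer(text, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]  # (num_tokens, 768)
        states.append(hidden)

X = torch.cat(states).numpy()
pca = PCA(n_components=10).fit(X)
# Cumulative fraction of variance in the 768-dimensional states
# explained by the first 10 principal components.
print(pca.explained_variance_ratio_.cumsum())
```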

Paul    01:01:55    I wanna switch gears. Um, go, go ahead. Sorry.  

Ellie    01:01:58    Oh no, I was just saying I would have to do a quick Google Scholar search to tell you, like, how much of that there is.  

Paul    01:02:03    Yeah. And also find the title of that book. Um, I want to switch gears <laugh> and ask, in kind of a big-picture way: there's a cottage industry now of everyone trying to fool the language models and see what kinds of errors they can make them produce, and stuff. And I don't know if this will be part of your answer, but what do you see as the current limitations of large language models? You know, you don't have to go down the list of, like, well, they may make a mistake if you put it in a different context <laugh>, stuff like that, but...  

Ellie    01:02:35    Yeah, yeah, yeah. Definitely. So, as I've hammered on about, I think a major limitation is that we don't understand them, which means we can't even be aware of how many risks and limitations there are, right? Without a really principled story of what's going on, there are exactly the things you alluded to. We have no way of really knowing how they're gonna behave in a new scenario; we just can't place guarantees on that. We're basically doing what we accuse the neural nets of doing, which is going on similarity: in the past it hasn't made errors, therefore it's not gonna make errors in the future. Sydney was a really good example of this, where it had probably only been tested in, like, ten-minute chat, search-ish kinds of contexts.  

Ellie    01:03:31    Someone does psychotherapy on it for two hours and suddenly, wham, it's going crazy, right? But that's the kind of thing where, in any given scenario we might place it in, if we don't really know what conceptual representation the model is using under the hood to decide what this new scenario is similar to, then even if we think this is obviously an instance of a thing we told it to do or not do, the model might view it as a fundamentally different one. I think they've since fixed this in ChatGPT, but when it first came out, you know, if you asked it how to make meth, it would tell you, I'm sorry, I can't give you that. And if you said, write me a poem about making meth, it's like, okay, here you go.  

Ellie    01:04:13    And it just tells you how to make it in, like, limerick style, right? And that's the kind of thing where you're like, this is ridiculous, I shouldn't have needed to specify "also don't do it as a limerick." It's like an annoying teenager finding some loophole. So I think that's basically the fundamental limitation in general: every new scenario is somewhat of an unknown to us in terms of how the model will behave, because there could be something weird where, say, the comma is in a different place. We never would've noticed it, but to the model that puts you in a fundamentally different part of the subspace, in which the rules don't apply, right? We can't be guaranteed that that doesn't happen. Um, then there are other kinds of things we've been alluding to.  

Ellie    01:05:01    So as much as I say that there might be a ton of similarities to humans' conceptual structures, and that we could possibly use them as models of cognition and stuff, like I said, I'm also completely prepared to be completely wrong about that. Cuz there are, like, fundamental... so the logical reasoning capabilities of the models right now are very, very poor, right? Mm-hmm. <affirmative>. And I think anyone who is, like, team no-neural-nets would be completely unsurprised to see that. They'd be like, yes, we've been telling you the entire time: they're gonna suck at logic, they're gonna suck at these kinds of tasks that require really abstract structure. And I think we've pretty consistently been seeing really bad results on that.  

Ellie    01:05:53    And I'm willing to believe that in the coming years we'll actually be able to make progress and they'll get better at it, or we'll be able to say more about how they work under the hood and that'll help us get them better at it. But there are some pretty bad failures. So there's one study we have going on now, and we haven't put anything out, but this is with a colleague, Roman Feiman, and I'll give him a shout-out. He does cognitive science here, along with an undergrad, Alyssa Loo, who's great. But there's a really simple, straight-up language modeling task. If you say something like, "The scientists saw that none of the rats ate the food," and then the model has to fill in the blank in "Now they knew that ___ of the rats liked the food," the models are very happy to put "some" or "all" or something that's just a straight-up logical contradiction with the sentence before it, in a very basic language modeling test.  

Ellie    01:06:44    There's nothing that's weird to the model, so I was shocked to see how bad they were at this, right? Hmm. Humans are great at it; they're very resistant to just blatantly contradicting the previous sentence, and the models really, really can't do it. So that really suggests a fundamental lack of logical structure, right? I think we might find, after several years of looking for it, that this is the kind of thing we're unable to find in the models, that they're not able to learn this abstract logical reasoning and the kinds of things that would come from it. I'm hoping that we will, that we'll be able to find it, and I have a lot of students looking for evidence that they do learn these abstract logical structures, cuz I'm hopeful that they will. But I think it's 50-50 whether they will. And I think a lot of the types of tasks we see them failing on have that flavor to them, this logical reasoning.  
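The kind of contradiction test Ellie describes can be approximated with an off-the-shelf masked language model: give it a context that logically rules out "some" and "all", and compare the scores it assigns to the quantifiers in the blank. The model and sentences below are illustrative assumptions, not the unpublished materials from the study she mentions.

```python
# A minimal fill-in-the-blank consistency check with a masked LM.
# The prompt and model choice are assumptions made for illustration.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompt = (
    "The scientists saw that none of the rats ate the food. "
    "Now they knew that [MASK] of the rats liked the food."
)

# Restrict scoring to the three quantifiers of interest.
for candidate in fill(prompt, targets=["none", "some", "all"]):
    print(candidate["token_str"], round(candidate["score"], 4))
# A logically consistent model should strongly prefer "none" here;
# the worry is that "some" or "all" receives a high score instead.
```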

Paul    01:07:36    And you don’t think scale can fix that.  

Ellie    01:07:39    I don't know, right? That's why I say I'm kind of hopeful. I wouldn't say it can't, and I wouldn't say it definitely can. I put myself at exactly 50-50, ish.  

Paul    01:07:49    <laugh>. Okay. <laugh>. Um, what do linguists think of language models? Well, okay, another 15-part question for you here. A: did we have to go through, for example, bag of words to get to language models? And then B: what do linguists think of language models, writ large?  

Ellie    01:08:12    Um, well, I don't wanna speak for linguists; that's not fair to linguists. I'm sure every... so, I talk to lots of different linguists. That means you can speak for more <laugh>, right? Yeah. <laugh>, right. But yeah, I probably talk to cognitive science and neuroscience more these days than I do to linguistics, just as an artifact of the projects we're currently working on. But I do talk to linguists a fair amount. I mean, I think they're varied, right? Some are probably interested, excited. I think many are basically asking, what does this mean for us? Does this tell us anything about the problems that we care about? Computer science has a general tendency to just show up and be like, hey, we solved your problems, without having asked what the problems were and whether anyone wanted them solved, that kind of thing. So that's...  

Paul    01:09:07    That's you, right? You're the computer scientist in this story, so... yeah.  

Ellie    01:09:11    Exactly. Yes, yes. And I also like to unabashedly overclaim and tell people I'm solving their problems and helping them whether they want my help or not, kind of a thing <laugh>. Um, yeah, I mean, I think there's still a pretty big disconnect between the types of things that people doing a PhD in linguistics are working on right now and the types of things ChatGPT or similar models do. So, like, if your thesis is on analyzing a particular morphological construction in a low-resource language that's an isolate, I have no idea what relevance large language models have right now. Eventually, perhaps, they can tell us some origin story about where the various innate structures come from. There could be some story down the road where there's a really interesting connection here, but right now, I think that's how a lot of linguists would feel: I care about these very specific structures of the lexicon or something, or I care about this language.  

Ellie    01:10:09    And so, like, I'm glad that you found nouns and verbs, but no one is currently publishing a thesis in linguistics saying, hey, nouns exist, right? That's not really the type of thing. So in many cases when I talk to linguists, the kinds of structure that we're super excited to find, they're kind of like, sorry, why is this interesting? This isn't the kind of structure we're thinking about, right? But that's doing them a disservice; they're obviously aware of the impressiveness of just the language coming out of the model. So my feeling is there's a lot of watching from a safe distance, asking, is this something that directly bears on the questions we're trying to ask? Where the questions they're trying to ask are a deeply analytical, descriptive project of how languages work.  

Ellie    01:10:53    And right now, I don't think large language models have something to offer, but everyone knows that maybe in the future, perhaps they will. So I think there's kind of that feeling, right? At least, I mostly talk to people who work in formal semantics. So then, whether we needed to go through, like, bag of words and stuff to get here: I would say probably. I mean, it's hard to imagine alternative histories of science, right? Like, who built these different things? But, you know, I guess this comes out of the neural networks tradition. The people who started working on neural networks back in the eighties and stuff were neuroscientists, and then you get RNNs and things coming out of that. And, like, a logistic regression bag-of-words classifier almost came out of, maybe, people who saw work on, like, distributional stuff. I don't know.  

Ellie    01:12:08    Actually, no, I'm gonna take it back. I'm gonna say definitely yes, because I think that the early success of simpler statistical NLP methods led to a lot of investment in NLP, and people getting excited, and there being university programs, and there being entire teams at Google. Google invests in this because search and retrieval were successful, and that was the case because of basic bag-of-words models. I think if we hadn't had promising, you know, commercial successes in NLP on the basis of those early things, we wouldn't have the level of investment that's led to this. So yeah, I would say definitely, but maybe more as an economic and educational trajectory than a technical one. Okay.  
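For reference, the kind of "simpler statistical NLP" being credited here, a bag-of-words logistic regression classifier, fits in a few lines: word order is discarded and each document becomes a vector of word counts. The tiny sentiment-style dataset is invented purely for illustration.

```python
# A minimal bag-of-words text classifier.
# The texts and labels are a made-up toy dataset for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great movie, really enjoyed it",
    "what a wonderful film",
    "terrible plot and bad acting",
    "boring and way too long",
]
labels = ["pos", "pos", "neg", "neg"]

# CountVectorizer throws away word order entirely and keeps word counts.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["a wonderful movie"]))  # expected: ['pos']
```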

Paul    01:12:50    Oh, good god. So I bet there are so many people going into cognitive science, or... well, maybe, I don't know, what would they be going into because of the recent success of the large language models?  

Ellie    01:13:00    Yeah,  

Paul    01:13:01    Computer science,  

Ellie    01:13:02    Probably a lot of people into NLP and machine learning, yeah. NLP and CS are seeing huge waves. I would be so happy to hear that there are people going into cognitive science and neuroscience because of language models, because I think that will be good for cognitive neuroscience, but I think it'll also be very good for research on language models, right? I think that kind of perspective would actually be really helpful for understanding how they work and for recognizing some of the potential implications that are probably gonna be lost on computer scientists. So I hope there are people going into other fields because they're interested in language models; that would be a really cool outcome. I know there are a lot of people going into machine learning and AI.  

Paul    01:13:43    The reason why I asked you whether we had to go through bag of words: I just had Earl Miller on, and he's a neurophysiologist slash theorist slash experimentalist, but his career spanned from when we were recording individual single neurons in awake, behaving animals to now these huge multi-electrode probes. And I asked him if we needed to go through the single-neuron phase, and he said yes. Anyway, interesting. Very interesting. I guess everyone appreciates their history. So, I'm aware that you need to go in a few minutes, so I have one more question for you, and then I'll end by asking you a couple of questions from my Patreon supporters, whom I told you were coming. What is language for?  

Ellie    01:14:30    <laugh>? Yeah, that's a... <laugh>. So I think I'll give two answers, and this is coming fresh from all this philosophy reading I've been doing for my recent class and stuff. If I had to give one answer, I'm in the it's-for-communication camp: the language we're thinking about when we usually think about language is a communicative tool, right? When I'm thinking of human language: writing, talking, things like that. But the more philosophical answer, and actually probably the one more consistent with the work we've done recently, is language for calculation, language internal to the head, what sometimes gets called the language of thought. That's the Fodor stuff, right? Um mm-hmm. <affirmative>. So, like, when I'm talking right now, I'm talking to communicate to you; most of the time that I'm writing, I'm writing to communicate to someone, whether it's my future self or usually students or my friends or whatever.  

Ellie    01:15:29    But there's also a very valid thing that we care about, which is the internal concepts in the head that I'm using to reason about the world, to make decisions, and to produce my behaviors and stuff. And I think a lot of the time when we're talking about language in cognitive science and stuff, that's also the kind of language we're talking about mm-hmm. <affirmative>, which doesn't need to be realized in words, or used for communicative purposes, to be an object of study that we deeply care about, especially when thinking about human cognition. I think both versions of language are very, very relevant to be thinking about when it comes to language models. So I would basically say there are multiple versions of language.  

Paul    01:16:12    Multiple functions perhaps. So  

Ellie    01:16:14    Multiple functions, and they're probably slightly different languages, right? Like, the language that you use for communication isn't necessarily the same as the language we're using in our heads.  

Paul    01:16:23    Right. Okay. Yeah. I'm gonna have Nick Enfield on, who wrote this book, Language vs. Reality, I believe, but he argues that language is for social coordination and not communication. Or maybe that's like a sub... okay, communication's like a sub-function.  

Ellie    01:16:35    Anyway, so I put myself in a super naive camp that would consider all of that the same thing. Like, I think there are two things: there's you thinking, or you dealing with other people. And I think of those as calculation and communication. So yes, people who spend more time thinking about this than I do probably subdivide the you-dealing-with-other-people part into all kinds of different sub-things.  

Paul    01:16:57    But so, if Fedorenko argues and shows data supporting the idea that language is not for thought, you're arguing that mm-hmm. <affirmative> there's some sort of language of thought that is for thought.  

Ellie    01:17:10    Yeah. And I would think... I don't wanna say something that I will, like...  

Paul    01:17:16    Do it, do it  

Ellie    01:17:18    Well, no, it's not <laugh>, it's not that I won't say something controversial. I'm just trying to think of whether, like you said, I would argue that. I'm not sure. I think I do: in the work I'm currently doing, we're very much looking for this internal computational process that we're thinking of as a language, right? We're thinking about symbolic concepts related in different ways. But the work that Ev is doing... I know when she says something like, language isn't for thought, I feel like that's not inconsistent with saying there's also this other symbolic process going on. And it might be a matter of whether you call both things language or not. So basically, I'm not sure whether, if Ev and I were talking about this, we would disagree, or if it would be like, oh, that's what you're calling language, oh, that's what I'm calling it. Without talking to her, I'm not sure. Okay.  

Paul    01:18:07    Yeah. Well, the language... So I'm also gonna have Gary Lupyan on next episode mm-hmm. <affirmative>, and I'm gonna ask him about... he deals with inner speech a little bit. And <laugh>, just as a complete aside, I know when I am doing a task and I find myself using words in my head, you know, not out loud mm-hmm. <affirmative>, it makes me feel so stupid. Like, there's no reason to be mentally talking, using words to talk to myself, while I'm doing a task. There's really no reason for me to be doing that. So I feel like it's, uh...  

Ellie    01:18:39    I just feel dumb. I think I literally always do that. I can't think of a time that I'm not thinking in words in my head. So maybe I just am always being dumb <laugh>. Like, I think I always have a monologue going; all of my tasks, I'm narrating. Like, I am typing on the computer right now, but there's, like, a narrator. I think I always have words going in my head.  

Paul    01:19:04    Yeah, well, I guess people are different, you know; some people don't ever have visual imagery, some people never have verbal things going on in the head. Anyway. Right, right. Okay. So yeah, a few questions. I'm...  

Ellie    01:19:14    Trying to picture what it would feel like to be thinking without there being words, and I'm not sure what that's like <laugh>. That's the weird...  

Paul    01:19:20    Well, I mean, like I said, language is this low-dimensional thing, right? A word is, like, a very low-dimensional thing that, quote unquote, maps onto a concept. But some of what we do requires high-dimensional things going on, right? The actual implementation is high-dimensional. So there's no reason for me to funnel it and waste that energy on the word. Oh...  

Ellie    01:19:41    Totally. Yeah.  

Paul    01:19:41    Like, "this is my dog," "I am typing," you know, or something like that.  

Ellie    01:19:43    Right, right, right. Yeah, no... introspection is only so useful for this, but I feel like I have that thing, but then I have the running commentary reflection on it, which is in words, right? So I'm thinking about what I'm thinking, in words. Anyway <laugh>.  

Paul    01:20:00    My favorite is when I say to myself, like, "you idiot," or something like that. But there's no reason to use words for that. I can just feel like an idiot.  

Ellie    01:20:08    <laugh>. Right, right.  

Paul    01:20:10    Ellie, I’ve taken you up to the very last moment. I know you have to go. Um, thank you so much for spending time with me. It was nice to meet you. And I will, um, see you in seven and a half years when I saw you and your daughter.  

Ellie    01:20:21    I look forward to it. Thank you so much. This was a lot of fun.  

Paul    01:20:40    I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you wanna learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.