Support the show to get full episodes and join the Discord community.
Check out my free video series about what’s missing in AI and Neuroscience
Chris Summerfield runs the Human Information Processing Lab at the University of Oxford, and he's a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI are hindered by the different languages each field speaks. But in reality, there has always been, and still is, a lot of overlap and convergence in ideas about computation and intelligence, and he illustrates this using tons of historical and modern examples.
- Human Information Processing Lab.
- Twitter: @summerfieldlab.
- Book: Natural General Intelligence: How understanding the brain can help us build AI.
- Other books mentioned:
- Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal
- The Mind is Flat by Nick Chater.
0:00 – Intro
2:20 – Natural General Intelligence
8:05 – AI and Neuro interaction
21:42 – How to build AI
25:54 – Umwelts and affordances
32:07 – Different kind of intelligence
39:16 – Ecological validity and AI
48:30 – Is reward enough?
1:05:14 – Beyond brains
1:15:10 – Large language models and brains
Transcript
Chris 00:00:03 Discussions around how we should build AI that don’t account for what we’re trying to build are essentially pointless, right? There is no right way to build AI. An anomaly arises because what we have is, we’re still in the mindset where, like, okay, the goal is to build, to, like, recreate a human, right? But suddenly we’re, like, in the natural world, and then it’s like, okay, so we wanna recreate a human in the natural world, right? And then this suddenly starts to be a bit weird. People in machine learning and AI research, particularly people who’ve entered the field more recently, say things like, it’s not clear what we have ever learned from the brain, right? From the study of <laugh>. So this is, you know, kind of, in my view, this is,
Speaker 4 00:01:01 This is Brain Inspired.
Paul 00:01:03 Good day to you. I am Paul. My guest today is Christopher Summerfield. Chris runs the Human Information Processing Lab at the University of Oxford, and he’s also a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman when we discussed ideas around the usefulness of neuroscience and psychology for artificial intelligence. Since then, he has released his book, Natural General Intelligence: How Understanding the Brain Can Help Us Build AI. And in the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. Uh, but in reality there’s always been, and still is, a lot of overlap and convergence about ideas of computation and intelligence. And he illustrates this using, um, tons of historical and modern examples. So I was happy to invite him back on to talk about a handful, uh, of the topics in the book. Although the book itself contains way more than we discuss, uh, you can find a link to the book, uh, in the show notes at braininspired.co/podcast/159. Thanks for listening. Here’s Chris.
Paul 00:02:20 I was just looking it up and, uh, it was actually almost a year ago today that, uh, we spoke last, uh, you were on with Sam Gershman and it was mid-January, so, uh, <laugh>, I was just looking it up cuz I was thinking, what episode was that? When was that? And since then, uh, you, you have, uh, published this bright, shiny new book, Natural General Intelligence, and I was, um, in the preface to the book, you write that it, it took you 10 months to write the book. And, and this is, uh, there’s so much in, in the book that that’s a blistering speed, it feels like, to write a book. But you also mentioned, hey, AI is, you know, increasing, um, advancing at a super rapid pace. And so one, how did you write the book that fast? And two, how much worry did you have that it would be immediately outdated, essentially, the AI facts in it, right? So neuroscience, right, psychology, not advancing so fast, AI advancing very fast. So you didn’t have to worry so much that you’d be outdated in your neuro <laugh> in your neuroscience, maybe. Yeah.
Chris 00:03:23 Thanks for those questions. Yeah. Well, I, it didn’t feel very fast when I was writing it <laugh>, I can tell you. Maybe, maybe I don’t, I dunno how long it takes people to write, but, um, yeah, I think, um, one, one of the reasons that I found it relatively easy to, to write, at least once I got going, was, um, because I, some of the material is part of a course which I teach. Um, so the sort of structure of the book and the kind of the conversations and the arguments and the things that I felt I needed to explain were sort of present, um, you know, kind of in my mind. Um, so yeah, that’s, that’s, that’s one reason. And, you know, kind of also, purely incidentally, you know, kind of, I just found that actually I really love writing. Um, I think, you know, kinda, I’m probably much better at writing than I am at actually doing other aspects of science <laugh>. So, um, I really enjoyed it. And, you know, I, I, um, the synthetic process of, you know, kind of getting those ideas down and trying to make sense of them, um, was personally, like, hugely enriching. I’m sure many other people have that experience too. Um, in terms of the book being out of date, yeah. So obviously I would love people to read it. <laugh> it is,
Paul 00:04:33 It’s, it’s not out of date, right? The way,
Chris 00:04:35 I mean, you know, kind of, I, I would love people to read it, but the, the neuroscience of course is not out of date. And the neuroscience, um, you know, kinda, I I still feel like neuroscientists are to some extent, you know, these, these ideas come out in machine learning and AI research, and it takes a while for them to percolate through, typically first to the computational community and then out into the sort of wider communities. And that process, you know, kind of some of the models and ideas which are described in the book will be, you know, kind of probably news to a lot of people working in mainstream neuroscience, and certainly to my kind of, you know, the audience that I had in mind while I was writing, which is sort of like undergraduates who are interested in cognition and computation in the brain.
Chris 00:05:22 Um, so that’s definitely true. But in terms of AI research and, like, you know, kind of the sort of what’s new, you know, sorry guys, but you know, it was out of date <laugh>, it was outta date three months after I submitted it, the AI research. Actually, you know, kind of, OUP, um, you know, OUP is actually a little bit faster to turn around, um, books than some other publishers. Um, but even so, you know, kind of by the time it was due to go to production, which was I think in July 2022, like three or four months after I submitted it, the, they said, well, hey, do you wanna update it? Cuz has anything changed?
Paul 00:06:01 That’s what I was gonna ask. Yeah.
Chris 00:06:03 I’d need to rewrite the whole thing. Sorry. <laugh>
Paul 00:06:05 <laugh>. No, but did you, were you, were you in your, at least in your mind, uh, did, were you thinking, do I need to go and see if a new model’s been released and and update it? Or did you just say, screw it, I’m gonna, this is what, this is what it is?
Chris 00:06:17 Well, I mean the, you know, kind of the, um, the major thing, so obviously GPT-3 was out while I was writing the book. And I, and, and as was, um, DeepMind’s initial model, so Gopher, um, the language model, was released while I was still writing the book. Um, but both were, well, the language models, you know, be between the time when I submitted and the time when it went to production, we went from basically having, like, you know, a couple of language models to having a whole explosion of, um, and startups and so on. The other thing that changed dramatically was text-to-image generation. So when I was writing the book, there were a bunch of, um, models. So there was the first version of DALL-E, and then there was GLIDE, which was this sort of, I can’t remember,
Paul 00:07:09 I can’t keep up. I don’t, I don’t, yeah,
Chris 00:07:11 I think, I can’t remember if that was Brain. No, I can’t, I probably should know this, but I can’t. Anyway, these early models, which, you know, everyone was like, wow, that’s incredible. But then compared to what came later, to what, you know, DALL-E 2 in particular, and then, and now we have, you know, all those, they were, you know, they were a bit rubbish by comparison. So, you know, kinda, I had to go in and update all the figures in the book because I’d had these figures of, like, wow, look what you can do with text-to-image generation. And, you know, <laugh> now what we can do is much better.
Paul 00:07:43 It’s almost like you, uh, we need to change the format and books need to be digital where you can just plug in the latest creation, right? Figure, figure 10, 10.2, and it just refreshes every month or something.
Chris 00:07:55 <laugh> Well, yeah, like something
Paul 00:07:57 Where
Chris 00:07:58 The, like, news is, uh, <laugh> is animated. Yeah. I guess, um, that would be great.
Paul 00:08:05 Yeah. So, okay, so I didn’t read the subtitle of the book. The, the title is Natural General Intelligence, and the subtitle is How Understanding the Brain Can Help Us Build AI. And throughout the book, um, you talk about the, uh, all of the principles in AI that can be kind of mapped onto some of the things that we have learned throughout the years in neuroscience. But of course, in like the industry world in AI, a lot of people are fond of bragging that we, we don’t need to pay attention to any natural intelligence. And look, we did this on our own, and you’re actually learning from us. So sometimes it was the, the wording in the book was, you know, I think you used the word inspired a few times, um, and, and sort of mapped them together, but it wasn’t clear always, like, whether the artificial intelligence advances were, you know, how much attention they were actually paying to the natural intelligence research.
Paul 00:09:01 Right. Um, so I don’t, I don’t know if you just wanna comment on that, uh, but because there are so many different, like when you put ’em all together, there are so many different, uh, ways in which they kind of can map onto each other. But, um, is it, I, I had the sense that the AI world really was proud about not having paid attention to the natural intelligence world. Do you think that’s the case, or do you think that, I know in your world, in, in like the DeepMind world and in your neuro-AI kind of world, that is a, a celebrated aspect, but in the AI world writ large, do you think that people feel the same?
Chris 00:09:40 Yeah, there’s so much to say here, let me, let me see if I can order my thoughts on trying to answer the question by
Paul 00:09:45 Parts. Obviously mine aren’t.
Chris 00:09:47 Yeah. So <laugh>, so the first, the first thing to say is, let, let me answer a question which you didn’t ask, which is about the converse direction, right? So has AI been useful for neuroscience? Unequivocally, yes. Right? So, you know, kind of our field over the past few years has been dramatically invigorated by this kind of, you know, what people call, like, neuro-AI, or I don’t know, they have different, different terms for it. But yeah, the advent of, you know, connectionist models in the form of, you know, deep learning architectures, recurrent architectures, increasingly deep RL, you know, kind of like all of these ideas percolating into neuroscience has been great for the field. And it’s, you know, it’s sparked some interesting debates around, you know, kind of what are we trying to do in computation? You know, do we care about, how much do we care about interpretability versus predictive validity for our models?
Chris 00:10:37 Um, you know, kind of what is an, you know, what is an explanation of a neural system if it has a hundred million parameters? Those kinds of questions, which have been very rich and I think good for the field mm-hmm. <affirmative> to have those debates. Um, and, you know, kind of just computational ideas flowing into neuroscience. And that’s, that’s great and we can talk more about that, but let me try and answer the question that you did ask. So, um, you asked about what people in machine learning and AI research think about the role of neuroscience or cognitive science and its relevance for their research. So there are many, many answers to this question because there are many diverse viewpoints. Um, let me, let me try and tackle the first one of them. And this is something that I hear quite often. So sometimes people in machine learning and AI research, particularly people who’ve entered the field more recently, say things like, it’s not clear what we have ever learned from the brain, right? From the study of
Paul 00:11:36 <laugh>.
Chris 00:11:37 So this is, you know, kind of, in, in my view, this is, um, a view, this, this is an opinion which does not, um, uh, adequately reflect the influence that neuroscience has had on AI historically, right? So historically, um, you know, from the very first, um, uh, symbolic AI models, right? Which maybe today are, you know, somewhat outta fashion, but you know, kind of the fields of, of psychology and, uh, cognitive science and neuroscience and AI research have been, like, you know, kind of intricately intertwined in their goals, um, you know, to sort of understand thinking and build a thinking machine. Um, and then as we move into the era of connectionist models and then deep learning, you know, obviously the structure of these models is inspired by neuroscience in the sense that, you know, they, they involve information processing in networks of neurons, albeit simplified networks of neurons, which obey many of the principles that we’ve learned about from neuroscience.
Chris 00:12:42 Um, and the people who built the first successful models were either trained in psychology or neuroscience or, or, or explicitly, um, you know, kind of have articulated how those ideas, uh, inspired them. And that continues to some extent up to the present day. So, you know, if you think about the latest architectures, you know, kind of, um, LSTM models, recurrent neural networks and LSTM models, you know, obviously have a very clear biological, uh, correlate or biological, I mean, I dunno if the, I dunno if Jürgen Habermas would say that, sorry. Um, um, uh, uh, Schmidhuber. Habermas is a philosopher <laugh>, he didn’t build any neural networks as far as I know. But, um, Schmidhuber,
Paul 00:13:27 You better get it right with Schmidhuber too, because he’ll let you know, right? If you don’t,
Chris 00:13:31 The comparison to Habermas is flattering, I think. But anyway,
Paul 00:13:34 Yeah. Ah, okay.
Chris 00:13:35 <laugh>, um, I dunno if Schmidhuber would say that he was inspired by neuroscience, but it’s clear that those, you know, kind of, there is a, there is an analogous, um, current of thought in psychology and neuroscience about how working memory happens, which is really, you know, really important. And then if you think about, you know, even up to the current date, right? Think about meta-learning, a really powerful, um, approach. Um, you know, kind of directly inspired by, um, ideas about prefrontal cortex and basal ganglia in a paper whose first and senior author were both neuroscientists, you know, I think mm-hmm. <affirmative>, that that historical trend is there and it continues to this day. Now that doesn’t mean that people in machine learning and AI research always appreciate that
Paul 00:14:19 There, there are so many different things. I mean, like, I wanted to jump in when you said LSTM, because you tell the story about the mapping of LSTMs with the basal ganglia and the gating function account of the basal ganglia. And actually, I don’t know that I had even, uh, appreciated that connection. I mean, I know that LSTMs, of course, anytime you have a new model in AI, the neuroscience world kind of, uh, grabs it and says, maybe this is a model for how the cortical column works, which has been done with the LSTM also. But, um, so I have like, you know, tons of notes from your book, and we could talk and we will talk about, uh, tons of topics. But to get us into that, um, I’m kind of wondering, you know, along the process of you writing it, even though you wrote it at such breakneck speed, did you change your mind about anything? Or did you come to appreciate anything, you know, that stands out in your mind, uh, that you, maybe you thought this way and then, well, that’s what changing your mind is, you thought this way about something and then as you wrote about it and clarified your thoughts and learned more information, um, you learned that maybe you should, uh, think about it a different way, et cetera?
Chris 00:15:27 Yeah, I think probably not globally, I think, you know, kind of really, but I think, you know, the book does not, I didn’t set out to kind of, um, really espouse one viewpoint or, or opinion, right? A lot of science books, including the ones, you know, many that I, I know and love, you know, what they do is they use a particular viewpoint as a way in to expose some problem. And, you know, they make the book, um, compelling through, you know, how compelling their argument is in favor of this view, right? Mm-hmm. <affirmative>, you know, uh, I could give you many examples. I didn’t set out to do that, right? I wanted to give something which was a bit more balanced, and I wanted to give voice, you know, in debates around like, how should we build ai, right? Like these perennial questions of like, you know, what are the constraints that we need?
Chris 00:16:18 Do we need really explicit constraints? Like some people have argued, you know, for like symbolic style or variable binding style, um, constraints. Um, I try to give voice to like both sides of the argument and often to try and point out where I thought maybe they were sort of missing each other, missing the point that each other was making. Um, and, you know, kind of, I, I often feel that my viewpoint may maybe, you know, kind of, this is, this is just me, but I often feel that my viewpoint isn’t sort of like, oh, you know, kind of where there’s two kind of different camps, it’s like, oh, I’ve, I’m strongly aligned to A or I’m strongly aligned to B. My viewpoint usually tends to be, you know, when someone from A is evaluating ideas from camp B, this is where they miss the point and vice versa. Right? And, you know, my view is often about where two opposing positions converge or diverge rather than, you know, necessarily strongly espousing one or the
Paul 00:17:20 Other. Is part of that talking past each other? How much of it is talking past each other and how much of it is a true staunch divide?
Chris 00:17:29 Yeah. I think there’s a lot of talking past each other, and there’s a lot of semantics involved, right? I mean, in the debate, you know, in the, in the later chapters of the book, I make reference to a very, very longstanding debate which relates to, you know, the question of, uh, as I mentioned, the question of whether we need constraints on our models which are quite explicit. So, you know, kind of, um, cognitive scientists, um, over, through, throughout the past 20 years, I mean, of course before as well, but especially throughout the past 20 years, have, you know, taken issue with the undifferentiated nature of the information processing in connectionist or deep learning models. And they’ve said we need things like explicit constraints. Um, those might be, you know, kind of, they, they, they might be mechanisms which, you know, kind of, um, deliberately discretize information in ways that make it amenable to, um, to like composition or, you know, kind of the sorts of discrete or rule-based processing that we have traditionally thought of as part of what humans do.
Chris 00:18:42 Um, and, you know, kind of those people have, have very often kind of advocated for those types of constraints without asking the question of whether they implicitly emerge outta, um, the types of architectures that we build anyway, right? So, you know, kind of it’s clear that neural networks can learn to perform, you know, various classes of rule-based inference, and actually they can do so in, in highly structured ways under the right constraints and with the right inductive biases. And, you know, kind of very often people from the symbolic camp have sort of said, you know what? Rather than saying, oh, well, actually that validates my point, that, you know, kind of this form of computation is important, right? Instead, they’ve tried to focus on, you know, identifying corner cases when neural networks fail and saying, well, you know, kinda, you haven’t done it right?
Chris 00:19:39 And, you know, I think that’s made the debate a little sterile. Um, on the other side of the argument, you know, kind of, there’s a, there’s a sort of unpleasant kind of braggadocio amongst, um, machine learning researchers, that is, like, you know, kind of a sort of muscular tendency to assert superiority because their models are, you know, kind of, um, they’re, they’re, they’re more effective in an engineering sense, right? Because they actually do stuff. Um, and, you know, to be really dismissive of ideas from, from cognitive science. And, you know, I think that’s also equally unhelpful because, you know, as, as Yann LeCun, who’s, you know, someone who’s advocated for, of course, the deep learning approach, you know, he’s been a, not only one of the founders of that movement, but also kind of a, a spokesperson for, for that opinion. Um, you know, um, as he says, what does he say? Oh, no, I’ve lost my train of thought. <laugh>. Oh, he, it’s terrible. He says
Paul 00:20:46 More than a few things, <laugh>.
Chris 00:20:48 Well, he says more than
Paul 00:20:49 A, this is, I
Chris 00:20:50 Can’t remember,
Paul 00:20:51 Is this pertaining to the nativist, empiricist, uh, divide, the symbolic versus... Yeah, sorry. Yeah,
Chris 00:20:57 Yeah. As he, as he says, um, like, you know, kind of machine learning researchers are in the very business of building inductive biases into their model, right? Right. You know, kind of those cognitive scientists who’ve argued for a, a kind of a nativist, what’s been billed as like a nativist approach, right? That we need to build in these constraints. Um, you know, what LeCun has said is, well, you know, kind of machine learning advances through innovation on those, those very inductive biases, right? And, you know, that’s, that’s literally, if you’re a machine learning researcher, that’s literally your day job, is to find those inductive biases. And I just think that’s true, right? I mean, so the conversation, yeah, the conversation has become, I think, you know, kind of circular and a little boring sometimes.
Paul 00:21:42 But, so you see, I mean, you point to inductive biases, and, um, talk about them at some length as, you know, this middle ground, right? That, uh, this, the, um, symbolic AI folks should acknowledge that there is built-in structure, the, the nativists, right? That there is built-in structure, um, that it’s not just one big undifferentiated neural network. Uh, and on the other hand, I suppose the connectionists, which, you know, as you say, Yann LeCun, who, who was, you know, one of the first people, off of, um, Fukushima’s neocognitron, to, to start building that structure into what, uh, into the earliest convolutional neural networks. Um, that there should be an acknowledgement that, yeah, we, we, and and maybe there is already an acknowledgement, as you were saying, that the inductive biases are necessary to advance.
Chris 00:22:30 Yeah, exactly. And I, I think often the pro, you know, and this is a point that I really try to make in the book, you know, kind of discussions around how we should build AI that don’t account for what we’re trying to build, hmm, are essentially pointless, right? There is no right way to build AI. What there is, is there are, there are solutions to, um, particular problems. And you know, very often when people disagree, they’re not actually disagreeing about the pathway to building the technology. What they’re disagreeing about is what the problem is in the first place, right? Mm-hmm. <affirmative>. And, you know, when we talk about inductive biases, you know, clearly, um, debates about the extent to which we should handcraft in particular constraints or inductive biases, those, those are theories not just of like what will work, right? They are theories of what will work in the context of a particular environment that you have in mind, right?
Chris 00:23:30 Of course, you need to tailor the degree of, um, constraint in your model to the open-endedness or diversity of the, the data that you want it to be able to process, right? So, you know, if you only ever want the network to do one thing, then of course you should tailor it to do that one thing. <laugh>, it would be silly not to, right? If you wanted to do lots and lots of things, then it needs to be very general. And the question is, how general is the natural world? You know, if you imagine that what we want is to build something that, you know, kind of behaves in the natural world, how gen well, how general is the natural world? Well, of course the natural, natural world is, you know, infinitely rich and complex. But that doesn’t mean that it doesn’t have constraints too. And when we look at the inductive biases that have been successful, they tend to reflect properties of the natural world.
Chris 00:24:25 So, you know, in the natural world, I don’t know, like, time goes forward. So it’s pretty useful to have a memory system, cause that means that your behavior can be conditioned not only on the now, but also on the past, right? And the relevance of information in the past may not be, like, uniform across time, right? Things that happen recently tend to be more relevant than things that happened a long, long time ago. And there is a function that describes that relevance, and it’s not necessarily, um, you know, kind of completely linear as well. Um, and that means that, you know, we need not just one memory system, but multiple memory systems, right? We have memory systems which deal with the immediate past and memory systems that deal with the longer term. And that’s reflected in the types of, it’s reflected in our sort of psychology 101, like, memory is modular, but it’s also reflected in the structure of the inductive biases that we build into our models.
Chris 00:25:23 And, and LSTM’s a great example, right? You know, what it does is it says, well, you know, actually we need to maintain information over two time scales in memory. One, which is sort of ongoing and active, and one which can be placed into a kind of pending state and then released when needed. And that turned out to be a really good solution, probably because it matches a property of the world, right? That some information, you know, you need to place in a pending state and then be able to use it when it comes up, which is very useful if you want to not lose your train of thought, for example.
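A minimal sketch of the two-timescale memory Chris is describing, for readers who want it concrete. This is illustrative NumPy only, not code from the book or from any model discussed in the episode; the sizes and random weights are made up. The cell state `c` plays the role of the slow, "pending" store protected by gates, and the hidden state `h` is what gets released for use right now.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step with two memory timescales:
    c (cell state)   -- slow, 'pending' information protected by the gates
    h (hidden state) -- fast, actively used output at this timestep."""
    z = W @ x + U @ h_prev + b                    # all four gate pre-activations, stacked
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    g = np.tanh(g)                                # candidate content
    c = f * c_prev + i * g                        # keep, overwrite, or hold long-lived memory
    h = o * np.tanh(c)                            # gate out only what is needed right now
    return h, c

# Tiny usage example with made-up sizes and random weights.
n_in, n_hid = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```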
Paul 00:25:54 <laugh>. Well, speaking of losing one’s train of thought. I mean, I’m gonna not, uh, I was gonna ask you about this later, but I’ll just bring it up now because you were talking about the properties of the natural world, and this, this comes toward the end of your book when you’re talking about artificial general intelligence. And you, you, a a moment ago, were mentioning about, uh, what you build depends on the problem that you want to solve. And I mean, you talk, you talk about umwelts in the book, which is, um, the term given to, you know, the, um, the relationship essentially between an organism or an agent, I suppose, an artificial agent, and the environment that it’s dealing with, its, its capabilities within that environment, and its perceptual world is built around its scale, its needs, its desires. You know, an ant has a very different umwelt than we do.
Paul 00:26:44 Um, and in, you know, not the same breath, but you also talk about affordances, which, which is that relationship, um, based on like what we can do based on the properties of the natural world. You know, we, you know, can reach for an apple based on, um, our abilities and, uh, the, um, accessibility of the apple and our abilities to eat it and consume it, and so on. And I, you know, I I’ve just been pondering the, the usefulness of, um, creating artificial intelligence in, in the guise of humans, uh, and whether an AGI or any AI could ever, um, be exposed to the same things that we’re exposed to, right? Because you also talk about homeostatic needs. Why would we want an AI to ever have homeostatic needs? And homeostatic needs shape our, the problem, okay, this is always coming, coming back to the problem that we’re solving, which is staying alive, um, reproducing, right? The problems that we’re solving. Why would we want to build anything to solve those same problems, um, to, to essentially have the same structure and relationship with the natural world? I’m sorry, that was a long-winded commentary and, and question, or,
Chris 00:27:58 Or no, I, I,
Chris 00:28:00 I, it’s clear. I mean, yeah, it’s a, it’s a great question. I mean, you know, kind of at, at the root is the question of like, you know, kind of what do we want to build and why? And the answers to that have changed, and they’ve changed in a way which reflects our different theories of cognition and our different understanding of biology. So, you know, kind of when people set out to build AI in the first place, you know, kind of the, the, the goals that they set themselves were very, very divorced from like the, the everyday structure of the environment, right? So, so, you know, kind of, um, if you think of the initial like reasoning, like, um, reasoning systems, you know, kind of whether it was like, you know, kind of, uh, systems for, for proving theorems, um, uh, through, through pure logic, um, or systems for, you know, kind of playing games like chess or whatever, these are pursuits which are kind of very divorced from, you know, the kind of, uh, quotidian, um, embedding in the natural world with, you know, kind of that’s characterized by, by bodily movements and, you know, interactions with social others and, and so on, right?
Chris 00:29:15 And so, naturally, you know, it was, it was quite easy to have a paradigm that says, okay, well, humans can do X, so let’s try and build something that does X right? And that, that was the initial paradigm. And that, that paradigm kind of continued on into, you know, kind of the deep learning era in which suddenly we have models which are grounded in much more naturalistic settings. And here an anomaly arises because what we have is, we’re still in the mindset. We’re like, okay, the goal is to build, to like recreate a human, right? But suddenly we’re like in the natural world, and then it’s like, okay, so we wanna recreate a human in the natural world, right? And then this suddenly starts to be a bit weird, right? Because it’s like, why would we wanna do that? Like, already, you know, there are 8 billion people on the planet, right? <laugh>, if you wanna create humans, it’s actually not that hard to create new humans, or although, you know, it does take two of you and a little bit of time.
Paul 00:30:08 It’s sometimes it’s hard. Yeah. But yeah, I, I get what you’re saying. Yes, <laugh>.
Chris 00:30:13 Yeah. So, so you’re, you’re absolutely right to point that out. Yeah. So it’s not, it’s not always, it’s not always simple, straightforward for everyone. But in general, on average, we, you know, as a, as a, as a population, we have been able to, uh, you know, kind of to populate the planet without recourse to, to AI. Um, so yeah, I mean, the, the, the question is where, where do we sit now, right? And how do we reconcile this idea of, like, that the goal of AI building is somehow agenthood mm-hmm. <affirmative>, like the, the recreation of human agenthood, with the kind of much more general problem of like, what would we actually do with the system if we got it right? And it becomes relevant because as we realize more and more that our intelligence, human intelligence, is, is grounded in those very facets of our behavior, which are like, kind of not really, you know, the, the, the e the slightly esoteric, um, you know, kind of ability to reason about abstract problems or whatever, but are also grounded in, you know, things like how we interact with each other socially or even grounded in like, you know, movements of our body.
Chris 00:31:28 Right? You know, part of object recognition is, is not just being able to attach a label to something, but being able to pick it up, right? And when we take that on board, then suddenly we ask ourselves the question, what if we built these systems? Would they actually have the same bodies as us? Well, maybe, but maybe not. And then you ask, well, would they have the same, like, social life as us? And then you’re like, suddenly into super weird territory. Mm-hmm. So, you know, almost certainly not, right? You, you, you, you, I mean, unless we get into sort of weird, it would be fiction, it would be crazy.
Paul 00:32:07 Yeah, no, it would, I mean, right. I don’t know, you know, self-organizing systems, et cetera, always come back to, you know, to this, well, let’s say a, uh, an attractor state or something, but I just can’t imagine, um, that trying to emulate something, trying to emulate humans in, in building an AI, the affordances, the umwelt, the problems to solve, uh, uh, the whole collection of problems to solve, it’s gonna be so different that it just, it has to be a, a very different kind of intelligence,
Chris 00:32:36 Right? So that’s the argument, right? So if you cut out those things, if our intelligence depends upon, you know, how we interact with the world physically and, um, and socially, and we, we a priori just, you know, kind of don’t wanna build an agent that would share those properties, then it means that its intelligence will be different from us. You know, in the same way that, you know, kind of, I, I, I love, um, there’s a book by, um, uh, a primatologist called Frans de Waal, a very well-known primatologist, called Are We Smart Enough to Know How Smart Animals Are? Mm-hmm. <affirmative>, um, it’s a well-known, uh, very successful popular science book. I love that book. And, you know, basically it’s a sort of, it’s, it’s a story of anecdotes from across his career and, you know, really good experimental research as well, which shows that broadly, you know, animals are incredibly smart, but they’re just not smart on our terms.
Chris 00:33:30 Right. And we deem them as being less smart than us because they’re not smart in the way that we are smart. Right. And I think when we evaluate the intelligence of AI, it’s often exactly the same thing. You know, we sort of say, oh, well this agent isn’t smart because it doesn’t understand this thing that we humans understand. And, you know, kind of, well, often there’s no reason why it should, right? Because, you know, it doesn’t go to the opera or, you know, go to dinner parties or do whatever humans do. Um, you know, kind of many, many other ways of interacting with each other, of course, but yeah. You know, and so, so we wouldn’t expect it to share our intelligence. So the paradigm’s changing. It has changed, and people are realizing that now. I think
Paul 00:34:14 Maybe before, well, yeah, sorry, I’m just jumping around. But, uh, I’m just taking a, taking cues off of what you’re talking about to talk about more things that you discussed in the book. And, and one is the concept of intelligence. Uh, and we were just talking about how we have a very particular kind of intelligence, and we define all other animals and AI agents based on our notion of intelligence, but our notion of intelligence has changed, uh, the focus of what we have considered intelligent, um, behavior and thought has changed over the years. And I don’t know if you wanna just kind of step through a few of the ways that it’s changed, but, uh, and or just acknowledge that it’s an ongoing process, <laugh>.
Chris 00:34:53 Yeah. I think maybe there’s one thing to say. So, you know, kind of, I, I worried a lot about what to call the book, and I wanted to call it Natural General Intelligence by analogy with artificial general intelligence, obviously. But there was something that really put me off the title, and that was the word intelligence itself, because that word has a really bad rep, and with reason, right? And that’s because, you know, kind of the whole notion that, you know, intelligence is this sort of, you know, this essence of individuals that we can use to kind, that we can quantify and we can use to categorize people as maybe having more of it or less of it, and so on, you know, has, has rightly, you know, kind of been viewed as, like, you know, kind of elitist and discriminatory. And, um, more than just being viewed as that, you know, kind of intelligence testing, the, the, the psychometric measurement of intelligence, has, not so much latterly, but certainly throughout the 20th century, been used to basically, you know, kind of reinforce cultural stereotypes mm-hmm. <affirmative>, um, and, and worse, right? You know, some of the earliest uses of, um, intelligence testing were related to, you know, eugenic theories about who should and should not be allowed to, to reproduce. Right.
Paul 00:36:11 <laugh>, those were bad, you know? Hang on.
Chris 00:36:14 Yeah. So, so, you know, rightly, you know, the word intelligence, I think, turns people off. Um, and, you know, kind of, I do try to highlight that in my book, and I highlight it a little bit, sort of by analogy, but with the point that I made about our failure to understand animal intelligence, right? That from the perspective of the sort of, you know, the western ivory tower in the developed world, you know, kind of very often it is an utter failure to understand the situatedness of intelligence in other cultures or other modes of behavior that has led to, you know, kind of, um, discrimination and bias in the way that we think about other people, right? And that you can see that in the history of intelligence testing, um, which is a pretty, you know, dismal, um, it’s a pretty dismal history in some ways. I mean, I, you know, I’m sure that there is, we, we have learned things about, you know, kind of the structure of cognitive ability, and I wouldn’t want to, to take away from those who work in that area, but I’m sure everyone who’s working in that area today would acknowledge that it has a darker side. And I think it is the, it is the reflection of exactly the same point that I was making about, you know, kinda our, our failure to appreciate animals’ abilities.
Paul 00:37:32 First of all, maybe, maybe there should be a caveat when defining intelligence. And, and you talk about the common, uh, Legg and Hutter definition, they collected 50 or so different definitions of intelligence and kind of whittled it down to a very small, uh, ability to adapt in new environments, I believe, something, I’m paraphrasing. But maybe, um, maybe they, maybe the little footnote should be, given the particular set of problems of the agent or organism <laugh>, you know, because it really, it does depend on, on the, the, the problem set.
Chris 00:38:04 Yeah, absolutely. Absolutely. And I think AI research has, you know, kind of, um, as I said, I mean, may, maybe this is an unfair and sweeping generalization, but coming from cognitive science, but spending time with machine learning and AI researchers, I often feel that, you know, kind of in machine learning and AI research, they’re much better at the solution than they are at the problem, right? Mm-hmm. <affirmative>. So, you know, kind of, they’re, people are really good at, like, you know, kind of, of finding inventive solutions to particular problems, right? But they’re much less good at thinking about what the relevance of that particular problem is in the first place. You know, the whole field is structured around benchmark tests, right? But nobody really asks where these benchmark tests come from or what they mean. I mean, people do, but, you know, kind of, it’s not the sort of day-to-day, um, topic of conversation. Whereas in cognitive science, you could always make, almost make the opposite, right? People spend so much time worrying about the problem that they don’t actually build models that are any good, um, which of course is another untrue sweeping generalization. But, you know, if you wanted to characterize the, um, you know, that would be one way of thinking about, you know, kinda what the difference between
Paul 00:39:15 <inaudible>. Well, I, we’ll get into talking about like the current state of reinforcement learning, uh, models later. But, um, what do you, you know, there, there’s a large, um, push these days in reinforcement learning for agents, and you talk about this in the book, to, um, use more ecologically valid tasks, right? So there’s a lot of gameplay in, um, in like navigational maze-type, uh, spaces and playing multiplayer games. Um, what, what do you see as the current state of that? I mean, is that close enough to ecological validity? Are we still super far away? And again, with the caveat that it depends on what we’re trying to do, right? Um, but assuming we’re trying to build like some human-like agents or something.
Chris 00:40:00 Yeah, there’s a, there’s obviously a lot to say here. So, reinforcement learning has been one of the dominant paradigms within machine learning and AI research for 40 years now, or, or something like that. And, you know, it kind of grew out of a, um, you know, the, the, the methods in RL grew out of an approach which was to try and solve control problems, essentially, right? These are problems in which there’s a relatively clearly defined objective, and what’s difficult is to work out how to get there, right? And what RL does is it uses various methods, you know, some of them based on sort of, you know, kind of learning, um, you know, kind of stored values for states or actions, um, and some of them based on, you know, sort of explicit search methods. Um, but it, it uses a variety of approaches to try to satisfy a well-defined objective. Um, so that’s why RL lends itself so nicely to things like playing Go, right? Because there’s a well-defined objective, like, you know, well, I mean, we can, we can talk about what the objectives might be there, there are many objectives actually when we play games. Um, yeah. But we, it’s easy to make the assumption that the objective’s just to win
Paul 00:41:17 Win. And
Chris 00:41:18 If you make that assumption, then RL is wonderful, right? And so, you know, kind of in video games, um, we also have a clearly defined objective. Most of them have a score, and you keep on going, you try to maximize your score, and then you go on the leaderboard. And if you, if you die, then it’s game over and you know, you gotta start again. So it’s very clear. But, but life is just not like that, right? So there isn’t in life, no, no, nobody says, you know, kind of, this was your score for today, right?
Paul 00:41:48 Sometimes, but Yeah.
Chris 00:41:51 Well, do they, I mean, not, no, not, not in,
Paul 00:41:54 Not in general real setting. Not in general, yeah.
Chris 00:41:58 Yeah. You know, you might get a score, you get a score for specific things, right? You know, maybe you’re very happy, right? Cause you did well on your account, whatever, but like, you don’t get a score for like, you know,
Paul 00:42:09 A data score.
Chris 00:42:09 I dunno how nice you were. Yeah. How nice you were to your colleagues or, like, you know, whether, like, you enjoyed the sunshine, or, you know, kind of, what, you don’t get a score for what life is really about, right?
Paul 00:42:20 Yeah. Yeah.
Chris 00:42:21 And so the paradigm is hard to translate. It’s really hard to translate. And, and yeah, the, the, there’s no, there’s no final point when you just win at life, right? It’d be nice, but there isn’t. So, so, so what actually happens in the real world, and this is a huge paradox, right? Because we have this paradigm, and RL, of all of the, the paradigms that we have in machine learning and AI research, RL is the one that had the biggest impact on psychology and neuroscience, right? Because it grew up in this intertwined fashion with theories of animal learning. And we know that the neurophysiology of, um, you know, subcortical and to some extent cortical systems is, you know, a really good match to what is actually implemented in successful algorithms, which have these optimality guarantees, which are, you know, really nice and whatever.
Chris 00:43:09 So there’s this beautiful, beautiful story, but at the same time, like, the whole paradigm is just a convenient fiction, right? It’s just not true. Like there are no, in, in RL, when you build an RL model, what you’re doing is you’re working within a paradigm called a Markov decision process. And you know, within that paradigm, you have these things called observations, which are like sensory data, and these things called rewards, which are what you’re trying to ultimately maximize over, you know, some, some theoretical discounted future, and they’re different things, right? But in the real world, this is just not true. You know, there is no difference between the taste of the apple and the reward of the apple. Mm-hmm. <affirmative>, right? The reward of the apple is its taste, right? Or maybe it’s the feeling of, you know, kind of the satisfaction that you have when you were, you know, you were hungry and you’re less hungry anymore. But that itself also is a sensation, right? There are no rewards, there’s just observations, right? So,
Paul 00:44:10 Well, but, but the RL
Chris 00:44:11 Paradigm in the real world,
Paul 00:44:13 <laugh>, but the reinforcement learning world would want to, uh, map reward onto, let’s say, the relative level of dopamine, right? Um, with unexpected value or something. And because that’s the computational push, push to, um, map it onto a, uh, tangible, quantifiable <laugh> value. Um, but you’re talking about qualia and the feelings. Um, and so we, we don’t have quantification of our feeling of awe yet, right? And but isn’t, it’s not,
Chris 00:44:43 It’s not about, it’s not about qualia. It’s not subjective, it’s about the flow of information. Rewards come from the world, right? The environment. It’s like the researcher decides that the apple is worth 10 points, and when you eat it, you get 10 reward points. When you have a, a, a, when there’s an increase in your dopaminergic signal, that’s not, like, the, the environment impinging directly on the brain to produce that signal. That signal is produced by the agent. That’s autonomy. It’s the agent, this time, yeah, the agent decides what’s good and what’s not, right? But, but the reason why this is really difficult in the RL paradigm is that it leads to chaos. Because if I can just decide what’s good and what’s bad, and then I’m just like, well, I’m sorry, but I’m just gonna decide that, like, sitting on the sofa and watching TV is, like, you know, it’s good and that’s fine, and I’ll just be happy forever until, you know, kind of, I, I die of hunger and thirst or <laugh>, right?
Chris 00:45:39 I don’t mind, cause I’m having a great time sitting on the sofa. So, wouldn’t you know? But then, you know, that, that, that is the problem, is that in, so, so in reinforcement learning, you know, kind of, um, we, uh, we often distinguish between, like, extrinsic and intrinsic rewards, right? And the, the salvation of the RL paradigm is that, you know, kind of, we can still think about rewards in, in the natural world, but those rewards must be intrinsic, which is that they must be generated by the agent, not by the world. And the question is, computationally, how did we evolve constraints that shape those intrinsic rewards so that our behavior is still adaptive, even though we are deciding what’s valuable and what’s not? And so, you know, kind of, I’m not saying that RL is wrong, it’s just that the simple, kind of, the simple story about extrinsic rewards that maps onto the video game or the, the, the chess playing or Go playing example just doesn’t work. You need a, you need a different paradigm,
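To make the distinction Chris is drawing concrete: in the standard RL framing, the environment hands back an observation and a separate, researcher-defined reward. The toy environment below is a made-up illustration (the class, the "apple", and the 10 points are all hypothetical), not anything from the book or from DeepMind.

```python
import random  # unused here, but typical RL environments add stochasticity

class AppleWorld:
    """Toy MDP in the standard RL framing: the *environment* hands the agent
    an observation AND a separate, researcher-decreed reward signal."""
    def reset(self):
        self.has_apple = True
        return {"sees_apple": self.has_apple}          # observation only

    def step(self, action):
        reward = 0.0
        if action == "eat" and self.has_apple:
            reward = 10.0                              # researcher decreed: the apple is worth 10 points
            self.has_apple = False
        obs = {"sees_apple": self.has_apple}
        done = not self.has_apple
        return obs, reward, done                       # reward arrives as its own separate channel

env = AppleWorld()
obs, total = env.reset(), 0.0
done = False
while not done:
    action = "eat" if obs["sees_apple"] else "wait"    # trivial 'policy'
    obs, reward, done = env.step(action)
    total += reward                                    # agent maximizes externally given points
print(total)  # 10.0
```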
Paul 00:46:47 But you don’t think that to have an intrinsic reward that an agent needs to be, quote unquote, truly autonomous. And, and then we get into the territory of, well, is that even possible if they’re living in a simulation that we, uh, programmed through the computer that we built and programmed?
Chris 00:47:07 Yeah. Well, this is, this is the difficulty, right? Which is that, you know, kind of one of the reasons why it’s nice to control the agent’s behavior with extrinsic rewards is that the researcher has really tight control over the objectives, right? In theory, right? It’s like you can say, you know, kind of, this is what I want the agent to do, right? Within limits. I mean, the agents can still learn to do things that you weren’t expecting, right? Um, that’s, you know, you have these things called specification gaming or, or whatever. Um, but if you let the agent make up its own reward function, then you’re sort of in unknown territory, right? How do you do that? The agent then is gonna learn to do whatever, and then that’s where you get into issues of safety and, you know, kind of, um, oh, because, because if the agent just, you know, if you, if you build an intrinsic function, like you say, oh, I just want my agent to be curious, right?
Chris 00:47:59 You know, if you, if you, if you have a, if you have an extrinsic reward, which is like, you know, kind of, please win this game, or please eat lots of apples, then you can be pretty sure that the agent is gonna do something like that, right? Um, that doesn’t mean there aren’t safety concerns there, as, you know, many people have reminded us. But, but the safety concerns, if the agent is making up its own value function, are much more severe or trying to satisfy something like, like, let’s just have control over the world, right? Mm-hmm. <affirmative>, then it’s like, oh, okay, <laugh>, you know, there’s lots of ways to have control over the world. Some of them might not be so good for people,
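And for the intrinsic side, here is one common way "be curious" gets operationalized: the agent rewards itself for observations that its own predictive model gets wrong, so nothing in the environment hands out points at all. Again a made-up sketch (the class and the numbers are hypothetical), just to show reward being generated by the agent rather than by the world.

```python
import numpy as np

class CuriousAgent:
    """Sketch of an *intrinsic* reward: the agent rewards itself for
    observations its own crude forward model predicts badly, so reward
    is self-generated rather than handed out by the environment."""
    def __init__(self, obs_dim, lr=0.1):
        self.prediction = np.zeros(obs_dim)   # crude model of what it expects to observe
        self.lr = lr

    def intrinsic_reward(self, obs):
        error = obs - self.prediction
        reward = float(np.mean(error ** 2))   # surprise (prediction error) = self-generated reward
        self.prediction += self.lr * error    # learn, so familiar things stop being rewarding
        return reward

agent = CuriousAgent(obs_dim=2)
familiar = np.array([1.0, 1.0])
for t in range(20):
    agent.intrinsic_reward(familiar)          # reward decays as this observation becomes predictable
novel = np.array([5.0, -3.0])
print(agent.intrinsic_reward(familiar), agent.intrinsic_reward(novel))  # small vs. large
```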
Paul 00:48:31 But, okay, so this, this kind of, uh, harkens back, and we’ll go down the reinforcement learning, um, path a little bit more here and then, and then we’ll jump back out. Uh, you know, you mentioned the reward-is-enough hypothesis, which was a paper written a few years ago. I think the title was Reward Is Enough, basically, you know, as, as a, um, uh, siren call that, uh, all you need, all we really need for, uh, true AI, quote unquote, um, is, is reward. But then that gets into the definition of what a reward is. Um, and you were saying there aren’t rewards separate from objects, and then, then you can make the concept so deflationary, right? So if you, if you say, instead of programming in reward, you give it some intrinsic <laugh> reward by giving it curiosity, right? But then you can just call curiosity reward, and the, and then the word reward becomes meaningless.
Paul 00:49:26 So, um, I don’t, I don’t know what, I guess I’m just asking for your comments on, you know, you argue against the idea that reward is enough, but, but you were just saying that to give an agent intrinsic reward, um, you know, we could go into chaos, but then I don’t know what intrinsic reward is, if it’s curiosity or if it’s control. Yeah. And or whether, because you’re still programming, programming it to maximize something, there’s still an objective function that you give it, so it, it, it’s not its own objective function, right? The agent can’t come up with its own objective function question mark.
Chris 00:50:02 Yeah. So, so clearly these issues are the, these issues are sometimes a bit hard to wrap your head around, right? So I think that that paper is actually making a claim, which I, which I don’t agree with, but I think it’s a slightly different claim from what it actually says in the title, right? Of course, the title maybe is a little bit of hyperbole. What, what the claim really is, I think, is that a, of course, you can define anything as a reward function, right? So you can say, okay, well would, you know, if we, if it’s intrinsic rewards, then, you know, kind of maximizing my knowledge or, you know, doing really good prediction in my generative model, right? These are my intrinsic objectives and they’re reward. I can write ’em down as a reward and then I maximize that and, you know, kind of everything.
Chris 00:50:46 So, so, so the argument there is a version of that claim, which feels unfalsifiable, right? Mm-hmm. <affirmative>. But I think what they’re actually saying is that that reward function when you write it down will be simple. That’s the nature of the claim in that paper, is that it’s simple enough that you could, as a researcher, as an AI researcher, you could write it down. There’s some like magic formula, which is sufficiently compact, right? That you write down, please, Mr. Agent please, or Mrs. Agent, please will you optimize this, this, this, this, and this. Right? And it’s sufficiently simple that by just doing that, you get intelligent behavior. And you don’t have to, this is an example I give in the book. Like, you don’t have to then, you know, go through the world, like the games designer of Atari games go through like assigning rewards to particular states like eating actions or what, uh, eating apples or whatever.
Chris 00:51:43 You don’t have to go through the natural world as an AI researcher and, like, you know, assign 10 points for seeing a beautiful sunset and, you know, 30 points for, um, you know, I dunno, going to the football match or whatever. Um, because that would just, you know, because obviously nobody has time to <laugh> to do that, right? And actually it would take, it would take far more time to design a reward function in that way than it would just, you know, kinda to build a system that does it <laugh>. Yeah. Yeah. So that paper makes a claim about the simplicity of the reward function, which I think, you know, is a, it is a defensible claim. And the, you know, people who wrote that paper are very, very, very accomplished, and, you know, researchers who thought very deeply about this problem for many years. So it shouldn’t be dismissed. I personally do not think that, um, you know, kind of, I do not think that there is a simple, um, extrinsic or a simple reward function of any form that could be written down, the maximization of which is just going to inevitably lead to intelligence.
Paul 00:52:51 Do you think that we are not algorithmic then, in that sense that, you know, as, as we develop, right, our reward functions change, and as when we walk in a new room and there’s a new problem to solve, and there’s only a screwdriver to solve it, you know, is it an algorithm that determines, hey, that’s a metal screwdriver, so I can connect these two wires to turn the light on? Because the function of a screwdriver changes, uh, given, so I’m, I’m harking back now to Stuart Kauffman’s example of, um, the next adjacent possible, I think it’s called, where, uh, essentially the argument is that we’re not algorithmic because, um, in a given situation, everything that we’ve learned in the world, these static concepts, uh, through, you know, computational, uh, steps, you, you, you still wouldn’t know. How do you, a screwdriver could still be used for a new purpose in a different setting, right? And in that sense, it’s fundamentally not algorithmic, and your objective function then has to change essentially to be able to use that screwdriver. I’m kind of mixing words here, but anyway, you know, are you, are, are you suggesting then that we would not be able to write down one equation, however complicated it is, that would lead to intelligence via the reward-is-enough hypothesis?
Chris 00:54:07 I think I do agree that we can't do that, but I would maybe put it slightly differently. I mean, I'm not quite sure I understand what you mean by algorithmic, but maybe that's because I haven't read Stuart Kauffman; it sounds like I should. But, you know, kind of, I think there's another way of putting my perspective, which is that in all of machine learning we use an optimization paradigm, right? So there's some objective which is specified, and then the agent gradually converges towards that objective, or not, right, if things go wrong, but ideally it converges towards that objective. And then, almost irrespective of which paradigm you're working in, there's an assumption that it does so in a general enough way, through the heterogeneity of the training set, the diversity of its experience, that you can then take it and plonk it in new and different environments and it will behave with similar levels of performance.
Chris 00:55:10 Right? But what this assumes is that the agent is somehow going to be able to take many samples from the training distribution and model the training distribution adequately, where the training distribution here corresponds to the set of experiences which it would be possible to have, right? Mm-hmm. <affirmative>. And I think that that's just not how it works in life, right? Because we don't stick around for all that long, right? You know, we only get to have one life, and for most people that life comprises only a tiny, tiny subset of the possible experiences that we could have, right? And so that whole idea that there's a global optimum for life and we're all just trying to optimize towards it, I think that paradigm is just not really applicable to the natural world. I think instead what happens is that we sort of make a lot of stuff up on the fly. So I really feel like what we do is optimize very, very locally, and in a really kind of forgetful way a lot of the time, right?
Paul 00:56:32 <laugh> I do, yeah.
Chris 00:56:34 We get locally good at some stuff, and then, you know, we go off, we formulate some views or we learn some policies and we put them into practice, but then things change, because the world is really open-ended, right? And yeah, you know, we're not that good at dealing with change, right? We don't behave like that kind of agent. You know, when I visit a new city, if I'm adept at behaving in that city, it's not because I have such a diverse set of experiences of all the possible things that a new city could throw up that I can just <inaudible> zero-shot it. Rather, what happens is when I go to a new city, I have a set of reasonable but very, very blunt priors, and I sort of rapidly assemble them to kind of just about get by, probably misunderstanding a lot of stuff and probably being incredibly inefficient. And then I go home and I probably forget most of it, and then I solve another problem. So the whole optimization-towards-convergence paradigm, I just don't think it works all that well in the natural world. Maybe you could think of it working for evolution, but for learning over the course of a single lifetime, a single individual, I just don't think it's all that useful.
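A rough way to picture the distinction, as a hypothetical sketch rather than anything from the book: the standard paradigm fits one set of parameters against the whole training distribution until convergence, while the "local and forgetful" picture keeps a recency-weighted estimate that tracks whatever the current city or task looks like and quickly overwrites the past. The two "cities", the learning rate, and the statistics below are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard paradigm: optimize one estimate against the whole training
# distribution until convergence (here, simply the global mean of all data).
def optimize_to_convergence(all_experience: np.ndarray) -> float:
    return float(all_experience.mean())

# "Local and forgetful" picture: an exponentially weighted running estimate
# with a high learning rate, so recent experience dominates and older
# experience is effectively forgotten.
def local_forgetful_update(estimate: float, observation: float,
                           learning_rate: float = 0.3) -> float:
    return estimate + learning_rate * (observation - estimate)

# Two made-up "cities" with different statistics; the local learner adapts to
# each in turn and never settles on a single global solution.
city_a = rng.normal(loc=0.0, scale=1.0, size=200)
city_b = rng.normal(loc=5.0, scale=1.0, size=200)
everything = np.concatenate([city_a, city_b])

estimate = 0.0
for x in everything:
    estimate = local_forgetful_update(estimate, float(x))

print("global optimum over everything:", optimize_to_convergence(everything))
print("local, forgetful estimate after visiting city B:", estimate)
```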
Paul 00:57:49 You also, when you go to a new city and you haphazardly get by, you end up at a coffee shop, and maybe it's okay, and then later you tell yourself, oh, I did that well, when perhaps, you know, we lie to ourselves. And you were kind of alluding to, I think it's Nick Chater's book that you talk about in your book, The Mind is Flat, right? We have flat minds. The mind is flat, which is a section in your book. So I like that example. But a few times in the book you say this is one of the biggest challenges in AI these days, and one of those is finding the right mix and match of reward functions. I don't know if you wanna just say a few words about that, given that we were talking about reward and reinforcement learning.
Chris 00:58:40 Yeah. I mean, you know, I think this is an interesting question, maybe not even for AI research, but it's an interesting question for neuroscience, right? Mm-hmm. <affirmative>. Like, RL clearly has given us a useful paradigm for thinking about learning, even if the caveats that I just raised apply. But I think it's useful to think about what we should actually optimize for, right? You know, kind of what are the objectives, you know, kind of in a world in which we
Paul 00:59:11 To win, to win, win win, that’s the objective, right? <laugh>.
Chris 00:59:16 Yeah. You know, I mean, it's interesting, isn't it? Let me say something else about RL that I think is relevant, which is that the other thing about RL that's a bit weird, in the way it's typically implemented and used, is that rewards, when you get them, are not fungible, right? An asset which is fungible means that you can store it and disburse it at a later time, right? Like money: I can put it in the bank and then spend it later. But that doesn't happen if you think of a deep RL agent playing Atari, right? It gets reward, but the reward just goes to its score, right? Mm-hmm. <affirmative>. The reward augments its score and it updates its value function and so on.
Chris 01:00:00 But when I encounter assets in the natural world, then typically an action which is available to me is to store them in some way so I can use them later. This happens even with primary reinforcers, right? So think of what food does. Food isn't something which is instantly dispersed, right? What it does is, you know, it has calories and it has nutrients, which your body uses over various different timescales to support itself, right? So it's disbursed in that way. The same with economic assets, right? They're used gradually over time. So there's this notion that what actually matters in the world are inputs which change our bodily state in various ways.
Chris 01:00:59 That's what I do when I'm eating an apple: I'm changing my bodily state. And, you know, in RL that's all in the world, and you could say, okay, yeah, maybe that's a state which I can observe and so on. And clearly you can always shoehorn everything into an RL paradigm if you want to. But I think the point here is that we need to think more in neuroscience about what it means to learn rewards for observations as a function of their consequences for our bodily state, right? And then you suddenly get into all sorts of interesting territory, like, you know, kind of about mental health, because of course there are failures to appropriately learn intrinsic rewards for states.
Chris 01:01:50 You can imagine those being tied to various kinds of psychiatric disorders. Mm-hmm. <affirmative>. Maybe depression and its comorbidities. Um, you know, we also need to learn things intrinsically like control. We need control, right? Control is very useful because it tells us when we're being efficacious and when we're not, and that's something really, really important. And similarly, we need to learn policies which exert the right level of control, right? And there's a lot of evidence that having that control is something that ties heavily into healthy mood states and healthy psychology. Of course, if you seek to have too much control or too little control, then that can be tied to pathological states, right? So, I'm not an expert in psychiatry or computational psychiatry, but you can see the connection to things like obsessive compulsive disorder, for example. So I think these things are really interesting, and they're not very deeply explored from that computational perspective in psychology and neuroscience. And I think that's a big opportunity.
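One way to cash out the idea of learning rewards from their consequences for bodily state is a homeostatic-style setup, where reward is defined by how an outcome moves internal variables toward their set points, rather than by points that vanish into a score. This is a minimal, hypothetical sketch of that general idea, not Chris's formulation or any specific published model; the state variables and numbers are invented for illustration.

```python
import numpy as np

# Illustrative internal (bodily) state and its homeostatic set points,
# e.g. energy and hydration, both normalized so 1.0 is "fully satisfied".
SET_POINTS = np.array([1.0, 1.0])

def drive(internal_state: np.ndarray) -> float:
    # "Drive" = squared distance of the internal state from its set points.
    return float(np.sum((SET_POINTS - internal_state) ** 2))

def homeostatic_reward(state_before: np.ndarray, state_after: np.ndarray) -> float:
    # Reward is the reduction in drive caused by an outcome, so eating an apple
    # is rewarding because of what it does to the body, and its effects are
    # dispersed over time rather than cashed in as points on a score.
    return drive(state_before) - drive(state_after)

# Eating when depleted is worth a lot; eating when sated is worth almost nothing.
depleted = np.array([0.2, 0.8])
sated = np.array([0.95, 0.9])
apple = np.array([0.3, 0.05])          # some calories and a little water

print(homeostatic_reward(depleted, depleted + apple))  # large positive
print(homeostatic_reward(sated, sated + apple))        # near zero, or negative if overshooting
```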
Paul 01:03:01 So that’s what you mean by the right balance and mix of, um, the right reward functions.
Chris 01:03:08 Yeah, exactly. Exactly. And to try and understand what people really want, right? <laugh> It's a silly question, but, like, what do people want?
Paul 01:03:22 <laugh> That's the problem: we don't know. We can't really define intelligence outside of our own problem space, and even individually our problem spaces are different, right? And we can't say what we want; what we want changes, what we claim to want changes anyway, and is that what we really want? Yeah. Yeah.
Chris 01:03:39 Well, that's Nick Chater's point in The Mind is Flat, right. Which is, it's similar to my we-don't-really-optimize point. Yeah. And, you know, <inaudible>, I'm hugely, hugely influenced by reading Nick's work and also talking to him about it. He's such an inspiration for me. But yeah, he says we don't really have a value function; we just sort of make up what we want.
Paul 01:04:02 Isn’t that pessimistic?
Chris 01:04:04 Well, it's only partly true. Like, you know, if you were water deprived for two days, you probably really would want a glass of water, and, you know, people really do care about each other. There are things that we really, really do want, preferences that we really, really do have, but a lot of our preferences are just kind of arbitrary.
Paul 01:04:26 Yeah. Especially if we're all satiated and, um, you know, happy and healthy, you can make up whatever preferences you want. Uh, yeah, exactly. Dictators.
Chris 01:04:35 But even in states of need, right? I mean, you know, people will go to extraordinary lengths. Yeah. People will suffer extraordinary privation in pursuit of some abstract ideal, which they've just kind of, well, I mean, you could say it's made up. I mean, you know, it's made up in a sense, right?
Paul 01:04:55 <laugh> Well, we don't know, right? That's a problem with the ontology of the universe, I suppose. And now, yeah, I'm pushing us a little too far.
Chris 01:05:04 These questions are above my pay grade. You need to get a philosopher on here to answer this. I'm sure many would.
Paul 01:05:09 Yeah. Well, no one. Yeah. Yes. And they all have the correct answer <laugh>. All right, so I'm gonna jump now again toward the end of your book. So before, we were talking about reinforcement learning, we were talking about the nature of intelligence and the nature of the problems that we're solving, and I was questioning whether an AGI could ever have our intelligence, because the nature of their problems is just gonna be different, and why would you create another human, number 8 billion and whatever, blah, blah, blah. But you talk about the notion of going beyond our brains, right? Building intelligences that are superior to our brains. Not necessarily human-like, not necessarily human-level, but moving beyond. And whether that's wise, and of course there's all sorts of ethical things that you touch on when you write about that as well.
Paul 01:06:08 But I'd just like to ask you what you think. For me, having read your book and thinking about these things, some of the things that I've been mentioning, that idea does not sound unappealing, nor does it sound impossible, given that we have our own faults and errors, and presumably we wouldn't need to build an AI system with those faults and errors. And even if it's an intelligence radically different from ours, that we don't necessarily understand, if it's solving problems in a problem space that is not our human problem space but is a problem space that we want to enter into, then, except for the paperclip-maximizer example, I don't see why we wouldn't do that and why it wouldn't be possible.
Chris 01:06:58 Yeah. So, I mean, there are many answers to this question. The first one is that in narrow domains, of course, we have already built things, right?
Paul 01:07:05 That are better than humans. A calculator.
Chris 01:07:06 In many ways, right? Yeah. My calculator is better at maths than me, at least at calculation. AlphaGo is better at Go than everyone. And, you know, my bet is that ChatGPT is probably more knowledgeable than any human that exists on the planet right now. Yeah. That doesn't mean it's always right, as people have been obviously quick to point out, but it probably is, on balance, better at answering general knowledge questions than any of the other 8 billion humans. I think that's probably a defensible claim. I might have to think about it a bit more.
Paul 01:07:38 Well, another point that you mention is that we always compare it to the super expert in whatever domain we're talking about, right? So we would compare a language model to Hemingway or something like that, and not the average person.
Chris 01:07:53 Yeah, no, for sure, for sure. Um, so, you know, we rapidly get entangled in questions of like, what do we want, and what's right, and so on. So there's many ways to answer this question. One is to counterpoint two different approaches, which I counterpoint in the book a little bit. In the beginning you asked me this question about neuroscience and its role in AI, and there's another way in which psychology and neuroscience are relevant for AI research, and it's around the role not of theories of psychology, but of human data and human data collection. Okay? And this is a counterpoint to, you know, we talked about how if you wanna build something that plays Go or chess really well, then you can use RL.
Chris 01:08:48 It's a very clear problem. And actually, what we learned in the process of solving Go, for example, was that you can do better if you stop using human data. Mm. So part of the reward-is-enough claim is also about that, right? Mm. It's sort of saying, you don't need to copy people, you just need to get the reward function right, and then everything else, with enough computation and blah, blah, blah, will take care of itself, right? Mm-hmm. <affirmative>. And that's exemplified by the story of AlphaGo, which many people will be familiar with, but basically AlphaGo originally was bootstrapped off a lot of supervised training from human play, essentially, right?
Chris 01:09:43 Um, you know, lots of Go games scraped from the internet, and predictions about which moves were likely to lead to a win, conditioned on how the humans played. And then later the team got rid of that human element and just built something that played against itself, and it was able to do even better, right? And that's the foundation for the story that we don't want humans in the loop at all, right? But I think that the advent of language models, large generative models and so on, has really pushed our thinking back in the opposite direction. So it's back towards the idea that actually what we really do want is something that learns to do what humans want.
Chris 01:10:35 So in other words, for the ultimate reward function, it's not like you write down something clever, like, oh, be as curious as you can, or be as powerful as you can or whatever, and you just let it loose and it takes over the world. Actually, what you want is an agent which is exquisitely sensitive to social feedback. And lo and behold, that's just like we are, right? So, you know, the really hard thing about being human is that it's difficult to know what other people want and what other people think, because it's not written on their faces, at least not very clearly. And so very often you say things that offend people, or that other people don't like, or you fail to anticipate their needs or their desires or their beliefs, and you get stuff wrong. And that's what makes it really hard to be a person, right? So we learn over the course of development to be really, really sensitive to social feedback from one another.
Paul 01:11:37 This seems somewhat related to inverse reinforcement learning. Is that, uh, Stuart Russell's inverse reinforcement learning, where the idea is to learn the preferences of the human? That's kind of a related idea.
Chris 01:11:48 Well, it's related, it's related to value alignment in general. Yeah. Right? So, you know, I think we're increasingly realizing that if we want AI that people will actually use and engage with, in the way that people really are starting to use these language technologies for a range of different applications, then the signal that we really care about is the social feedback. And that's not something which was a priori obvious, I think, throughout the history of AI research and deep learning research, in which it was always the researcher that set the objective, right? Hmm. But we've been talking throughout this conversation about how the natural world doesn't tell you what's good and what's bad and whether you've won, and that's true to some extent, but social others do, right?
Chris 01:12:49 Social others provide you with dense feedback, which is really, really important, and which we have learnt to interpret in ways which are obviously hugely consequential for how we behave, right? Mm-hmm. And so, yeah, this is what is happening, right? So, you know, the new method, which people call RLHF, reinforcement learning from human feedback, is like taking over. Learning to do this well is what everyone is excited about right now, I think, because this is the answer to building models which are really going to be preferred by people and be useful to people. Now, that doesn't mean you sidestep all of the thorny problems around value alignment or fairness, for example, because as soon as you say, okay, well, our agent is gonna be trained to satisfy human preferences, it's like, well, whose preferences, right?
Chris 01:13:49 So, you know, is it gonna be the preferences only of educated Western people? How do we know that the preferences of historically marginalized groups are properly weighted? What happens when people want different things? What happens when you have majority and minority dynamics and you have dissenting views, and who does the agent satisfy? So this is, I think, where the action is right now. I don't talk about this very much in the book, but this is actually the research that my group does at DeepMind. For many years we've worked on these problems of value alignment, group alignment, preference aggregation, and, you know, I'm really gratified that now that we have good models in the wild, or almost in the wild, these issues are coming to the fore. Maybe you should think of another podcast, which is about the relationship between AI and social science, because I really feel that that is as exciting a frontier as the membrane between neuro and AI, which you've done so much to highlight.
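For a concrete picture of what the preference-learning step in RLHF typically involves, here is a minimal, hypothetical sketch: human annotators compare pairs of responses, and a reward model is trained so the preferred one scores higher, via a Bradley-Terry style loss. It is a sketch of the general recipe, not any lab's actual pipeline, and it deliberately leaves open whose preferences are collected and how they are aggregated, which is exactly the question Chris raises; the embeddings and dimensions are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy reward model: maps a (pre-computed) embedding of a response to a scalar score.
class RewardModel(nn.Module):
    def __init__(self, embedding_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embedding_dim, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the score of the human-preferred
    # response above the score of the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Fake "human feedback": pairs of response embeddings, where the first of each
# pair is the one the annotator preferred (random stand-ins, not real data).
torch.manual_seed(0)
chosen = torch.randn(64, 16) + 0.5
rejected = torch.randn(64, 16) - 0.5

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    loss = preference_loss(model(chosen), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned scores would then serve as the reward signal for RL fine-tuning
# of the language model; that second stage is omitted here.
```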
Paul 01:15:10 Oh, someone should do that podcast. I can't fit it in, man <laugh>. You said you didn't really talk about that much in the book. Another thing, so I'm aware of our time and I have this long list of questions, so I'm trying to prioritize what to ask you. One of the things that I wanted to ask you: we started off talking about large language models and foundation models, which you allude to in the book a lot. When it comes to the transformers, your book is largely about comparing AI and neuro, and inspiration back and forth, and how we can use neuro for AI. But then you kind of have a one-off line where you say, well, it's really not clear yet how these awesome new models compare to our psychology and our brains. And there have been a few studies comparing, like, fMRI and EEG, and that we are prediction machines just like the transformer models. So it's kind of early days in that. But I wonder if you have more thoughts on that since publishing the book, or just, you know, whether you think transformers are getting further away from our brains, or if they're actually closer to our brains and psychology than we are appreciating at the moment.
Chris 01:16:26 Yeah, great question. Thank you for asking that. So yeah, you're absolutely right. People have done really nice work taking transformer models and comparing them to the brain, somewhat by analogy with the way that people compared convolutional neural networks to the ventral stream. And lo and behold, there are commonalities, and you might say, oh, that's expected, because they're both predicting the same thing, so presumably it's the semantic information being predicted that is the root cause of the shared representation. So maybe it's not that surprising, but that's not to detract from the importance of that work. I think it's been a really nice brick in the wall. But you are asking, I think, a deeper question, which is, why the hell are transformers so good? Nobody knows why transformers are so good, but they are so good. They're really, really, really good.
Chris 01:17:16 Um, and it's clear that the step change that has happened since 2019 in generative models is not just about scale, it's about the transformer. So here's a theory. It's sort of my theory, but it's really heavily inspired by other people, in particular the work of James Whittington and Tim Behrens, who have thought about this very carefully and probably in more detail than me. Mm-hmm. <affirmative>. If you think about what a transformer does, it takes information, for example in a sentence, and rather than receiving bits of information one by one, it takes a chunk of information and it kind of buffers it. And what it learns, it's called self-attention, but what it essentially learns is a set of cross-mappings between the information which is provided in the sequence and, critically, the positions in the sequence that that information occupies, right?
Chris 01:18:26 So you actually have in the transformer an explicit representation not only of the contents, not only what is in the sentence or the image or whatever, but where it is, right? Mm-hmm. <affirmative>. And this is very different from an LSTM, because in an LSTM time is there, but it's implicit, right? You just get the things bit by bit in time; you never learn to represent time itself, right? But the transformer actually affords the opportunity to represent position in the sequence, which, if you're doing language going forward, is time, right? Or space in an image. So what we have suddenly is...
Paul 01:19:03 A transformer is not recurrent, like technically,
Chris 01:19:06 Well, um, yeah. I mean, it depends on how you define recurrent, but it certainly does not maintain information through recurrent dynamics in that way, right?
Paul 01:19:21 Sorry to interrupt. Yeah,
Chris 01:19:22 No, no, that's absolutely fine. But anyway, I think the key element here, what I think is really important, is that you have explicit representations of contents and position, which we might call structure: where is everything in relation to everything else? So why is that important? Well, it's important because several people, including Tim Behrens and a number of others, you could probably credit Christian Doeller with some of these ideas, and my group has very explicitly made this claim as well, have argued that it is the explicit representation of space, in either a navigational sense or a kind of peripersonal sense, like the space that's around you, that is really important for doing complex forms of reasoning.
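To make the "contents plus position" point concrete, here is a minimal, hypothetical sketch of a single self-attention step in which token content embeddings are combined with an explicit positional code before queries, keys, and values are formed. It is not taken from any particular model; it just shows where "where" enters as an explicit input, rather than being implicit in the order of processing as in an LSTM. The dimensions and weights are random stand-ins.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sinusoidal_positions(seq_len: int, dim: int) -> np.ndarray:
    # Explicit code for *where* each token sits in the sequence.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / dim)
    enc = np.zeros((seq_len, dim))
    enc[:, 0::2] = np.sin(angles[:, 0::2])
    enc[:, 1::2] = np.cos(angles[:, 1::2])
    return enc

def self_attention(content: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    seq_len, dim = content.shape
    # The whole chunk is buffered at once, and position is added explicitly to
    # the content, so the layer sees both "what" and "where".
    x = content + sinusoidal_positions(seq_len, dim)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attention = softmax(q @ k.T / np.sqrt(dim))   # learned cross-mappings between positions
    return attention @ v

rng = np.random.default_rng(0)
dim, seq_len = 8, 5
tokens = rng.normal(size=(seq_len, dim))          # stand-in content embeddings
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) for _ in range(3))
print(self_attention(tokens, Wq, Wk, Wv).shape)   # (5, 8)
```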
Chris 01:20:12 And the argument goes something like the following. It's a little bit dense, but let me see if I can explain it. I'll explain the theory that we articulated, which is not in terms of allocentric space or navigation; it's in terms of reaching and grasping. So imagine that you want to learn a very simple abstraction, like the concept of three, right? What does three mean? Three means that there are three objects in a scene, right? Okay. So we understand the concept of three. And what I mean by understand the concept of three is that if you saw a new scene that had three objects that you had never seen before, completely new objects, you would still know there are three of them, right? So that means that your knowledge of three is independent of the contents of the scene.
Chris 01:21:02 Mm-hmm. Right? In quite an explicit way, right? And so how do you actually learn that? It's really hard to think about how a neural network would learn that, because what neural networks learn, of course, are just statistical mappings between inputs and outputs, right? So if you take three new and completely different objects, they would be out of distribution. So how would you ever learn that? Well, one argument is that what you do is you use the way in which you interact with a scene, your action, as an input to the system. And by using that as an input to the system, by explicitly representing how you can move, what you're doing is effectively constructing space: three objects can only exist in three different locations, because you can move.
Chris 01:21:54 What that means is that you can move to them and pick them up in different places, or move your eyes to them, and those are different locations that you've moved your eyes to. So the idea is that our knowledge of abstractions is formulated through the actions that we take in space. And it's by taking those actions in space that we learn an explicit representation of space; it means we actually have codes for space, which is of course what's there in the primate dorsal stream, right? You know, we have these things called salience maps, which are intricately tied to the representations of where objects are in space. So that explicit representation of space, we argue, is really important for being able to reason about objects and being able to compose scenes from different combinations of objects in different spatial positions.
Chris 01:22:51 Basically, the primate brain factorizes what things are and where they are really, really explicitly. And so the argument is that in the transformer it is the explicit representation of item and position that, in the same way, gives it the ability to do that amazing composition. So if you ask DALL-E for, you know, a blue tomato on top of a pink banana, then it will generate an image that has that property, right? And, as far as we understand, non-transformer-based architectures can't do that. And I believe that it's because the explicit representation of contents and position allows the network to factorize what and where, in the same way as the primate brain factorizes that information in the dorsal and ventral streams.
Paul 01:23:47 We're leaving out "when" in that account, right? Sorry. You know, I'm thinking, you were mentioning it when you take action, and I thought you were also gonna start talking about the timing of those actions. But in a sense, time and space could be considered analogous, right? So, like, a transformer doesn't read the way that we read, but there is a timing, and so we have that order also.
Chris 01:24:09 Yeah. Sorry, I sort of garbled my examples, right? The transformer is easiest to illustrate by thinking about a sentence, but the examples of composition are most vivid when thinking about image generation, right? And in image generation, position is space; in a sentence, position is time. In the primate brain, or at least in the human brain, it seems more like, when you really need to handle time over longer periods, well, clearly we represent the dynamics of action. We know that neural signals in the parietal cortex, in the dorsal stream, build up in concert with the dynamics of action. We know that parietal cortex is really important for really short visual short-term memory, or iconic memory, like short representations in between two saccades, for example. But the representation of time in a sentence, in the human brain, requires the prefrontal cortex, right? Which is why, of course, you get disordered action sequences and disordered language if you have lesions of various parts of the prefrontal cortex. So I think for time it's a similar story, but in humans you additionally need the prefrontal cortex. But yeah, same principle.
Paul 01:25:24 Not to, um, you know, sometimes the way that I frame questions, it's like I'm advocating some sort of AI-versus-neuro claim to foundational ideas, right? Because what I was gonna ask you is: in, let's say, 10 years, if you were gonna rewrite Natural General Intelligence, would you be able to include a story about how transformers map onto, and/or were inspired by, neuroscience? And I ask because, well, I don't think they were inspired, right?
Chris 01:25:57 I don't think they were inspired by neuroscience. So I gave you lots of examples of where ideas kind of co-emerged, so you might say that maybe there's a co-emergence of these ideas in neuroscience and AI research, but, you know, the transformer, I do not think the authors of the transformer paper read a neuroscience textbook and then were like, hey, this is a great idea <laugh>. That's definitely not how it happened <laugh>. Although, yeah, sure, maybe you could say the ideas are there in the ecosystem. But yeah, I mean, I think we will know more in 10 years' time, hopefully before then, about that process of factorization and composition. It's already a pretty significant research topic in computational neuroscience. And I'm fairly sure that we're gonna learn more about that, and that's gonna be really fruitful. And maybe the transformer is gonna be yet another tool from AI research that gets co-opted and used to understand the brain. It wouldn't surprise me in the least.
Paul 01:27:05 All right. So I’ve, uh, taken us to the limit here. Um, I really appreciate your time. I have one more question for you. And earlier you mentioned that you didn’t wanna call the book Natural General Intelligence, uh, and yet the book is called Natural General Intelligence. So why is it called that? And, and did you have an alternative?
Chris 01:27:22 No, I just have a limited imagination, and eventually I gave up looking for another title.
Paul 01:27:26 <laugh>. Okay. Great answer. All right. Well, I really appreciate, uh, the book and, and our conversation today. Uh, thanks for coming on again, and I guess I’ll see you in a year’s time or so, perhaps
Chris 01:27:37 <laugh> I'll have to write another book then. Um, but yeah,
Paul 01:27:40 Get on it.
Chris 01:27:41 Thanks. It’s always a pleasure.
Paul 01:27:59 I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you wanna learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You're hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.