Brain Inspired
BI 142 Cameron Buckner: The New DoGMA

Check out my free video series about what’s missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Cameron Buckner is a philosopher and cognitive scientist at the University of Houston. He is writing a book about the age-old philosophical debate over how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological “domain-general faculties” underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that by building systems that possess the right handful of faculties, and putting those systems together in a way that lets them cooperate in a general and flexible manner, the result will be cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content – how our thoughts connect to the natural external world.

0:00 – Intro
4:55 – Interpreting old philosophy
8:26 – AI and philosophy
17:00 – Empiricism vs. rationalism
27:09 – Domain-general faculties
33:10 – Faculty psychology
40:28 – New faculties?
46:11 – Human faculties
51:15 – Cognitive architectures
56:26 – Language
1:01:40 – Beyond dichotomous thinking
1:04:08 – Lower-level faculties
1:10:16 – Animal cognition
1:14:31 – A Forward-Looking Theory of Content

Transcript

Cameron    00:00:03    One of the places where I think the recent machine learning can really inform the philosophical debate is by filling in some of these ellipses, uh, you know, getting rid of this whiff of magic, uh, to some of these key appeals that the empiricists have made in the past, and showing how, you know, vaguely brain-inspired computational mechanisms could actually do some of these operations. And I think that that’s the sense of wonder that a lot of people get. You know, all the critics of deep learning almost love to then paint deep learning as being the contemporary reincarnation of psychological behaviorism. And this is just totally wrong. <laugh> To understand how this mischaracterization started, you know, I think you have to go back to the beginning of cognitive science.

Speaker 0    00:01:03    This is brain inspired.  

Paul    00:01:16    Hello everyone. I’m Paul. On this episode, my guest is Cameron Buckner, who is a philosopher and cognitive scientist at the University of Houston. And today we discussed two main topics Cameron has been working on. The first centers around an age-old debate in philosophy that has continued into modern questions about how to understand natural intelligence and how to build AI, artificial intelligence. And Cameron is in the final stages of a book all about this. The debate, um, goes by many names, like nature versus nurture, rationalism versus empiricism. But the question revolves around how much of our intelligence is learned from experience versus how much is innate, or, in AI, how much should deep learning models learn from scratch versus how much should we simply build in? Cameron’s take on this debate is roughly that we should ask what are the handful of psychological faculties that we possess: perception, memory, different kinds of learning, and so on.

Paul    00:02:17    And importantly, how might these faculties be implemented in a general and flexible enough way to work with each other as interacting modules, to build up to something that we would call an intelligent system, something like what Chris Eliasmith is working on with his Spaun cognitive architecture, which we discussed in episode 90. So a lot of Cameron’s work focuses on how to think about mapping our psychological faculties onto all the different kinds of deep learning models, uh, coming out, like convolutional neural networks, transformers, and so on, and how we can use these deep learning tools to essentially test what are the right handful of variations of architectures and learning rules, and so on, that need to be built so as to work well together. That’s what we discuss in the first part of the episode. Then we switch gears a little and talk about Cameron’s recent work on another age-old topic: how our mental contents, our thoughts, beliefs, desires, uh, are connected to the things that they are contents about, how our mental contents are grounded in the world.

Paul    00:03:23    So without going into much detail here, uh, Cameron proposes our mental contents are grounded by our brain’s predictions of what’s about to happen, and more specifically by the learning that takes place when those predictions need to be updated due to error. So these are both huge topics, and I encourage you to learn more by diving deeper into his work, which I link to in the show notes at braininspired.co/podcast/142. On the website, braininspired.co, you can also learn how to support the podcast through Patreon or my online course about the intersection of neuroscience and modern AI. So please check those out if you value what I’m doing. Okay. Enjoy Cameron. So Cameron, I can tell that you haven’t been outside for the past at least, uh, I’d say 15 minutes, because you’re not dripping in sweat from the hot soupy Houston summer weather.

Cameron    00:04:20    I’m actually in Cambridge right now. So, yeah, we do the climate refugee thing every summer, where we try to, uh, do a fellowship somewhere. So I’m doing a fellowship, uh, this summer with, uh, the Centre for the Future of Intelligence here, that’s funded by Leverhulme. And then, uh, I’m a fellow at Clare Hall here. So it’s been a lovely day here. It was 50 Fahrenheit when we woke up, and I went for a run when it was about 60 along the Cam. So don’t worry about me. I had a spectacular day.

Paul    00:04:52    <laugh> Well, congratulations for, uh, having a break from the heat. Uh, well, thanks for joining me here. Um, so we have potentially a ton to talk about, so we’ll see what we get to. Uh, I know that your sort of original background is in computer science and artificial intelligence, and you still are very interested in those, and we’ll talk about those things, but you’ve pivoted a lot of your work to, um, philosophy, to issues surrounding philosophy. Um, the first thing that I want to ask you: in reading your work and watching a few of your talks, you know, you make reference to all these old philosophers, and forgive me if this is a, uh, hackneyed question, but I wanna know how difficult it is. So when I read old philosophy, it’s really difficult for someone like me, whose elevator might not go to the top floor, to interpret, you know, the meaning of what they’re saying. And I have a sense that through history, in every modern era, at every step, we interpret them slightly differently. Uh, is that true? And how difficult is it to go back to these original sources and interpret, uh, what they meant by their words?

Cameron    00:06:04    Yeah, I mean, I’ve had courses on the historical philosophers in grad school, so I had a bit of training in the terminology and so on. But there is kind of an initial bar to get over that’s just regarding the way they use language differently and the way they structure sentences and so on. But I’m actually struck kind of in the opposite direction by how, once you get past that initial sort of veneer of unfamiliarity, the philosophers I talk about, like Locke and Hume, are really grappling with the same fundamental issues that machine learning researchers are grappling with today. For example, you know, the problem of induction – I wrote a paper sort of about that – is a question... it’s just the same question that confronts a machine learning engineer, which is: how do I extract some general category representations from a finite amount of experience, right?

Cameron    00:07:00    Cause you want your categories to be of the right kind of abstract structure, so that they’ll generalize to future data that’s maybe, uh, slightly dissimilar to the data it’s seen in the past. It’s basically the problem of induction, you know. And if you go through Locke and Hume’s, uh, Treatise or Enquiry, you see, you know, they’re talking about: how do I learn causal relationships, for example, from, uh, merely sets of contingencies that I see in my past, right? And how do I learn to do geometry? You know, and how do I learn about number? These are the problems that they spend pages and pages talking about. It’s the same problems that, you know, people are struggling with on the cutting edge of deep learning today. So, you know, I think it definitely is a bit daunting when you first try to jump right into it.

Cameron    00:07:55    And it sort of seems like you’re swimming in a very unfamiliar sea, but once you get past that initial hurdle – and I think also going through a bit of secondary literature can help too, you know, so finding a philosopher who’s done a bit of the interpretive work connecting the two – this is one of the things that I try to do in my work, right, is to give a kind of crib sheet for the machine learning engineers who are interested in, you know, what Locke or Hume or William James or Ibn Sina or whoever, uh, thought about these problems.

Paul    00:08:26    What proportion of, uh, AI researchers is that, that are interested in those topics?  

Cameron    00:08:31    I mean, I think you’d be surprised. So, you know, one of the things I do in the first chapter of my book manuscript is I go through and I point to all the places where, uh, the machine learning researchers or the, you know, classical AI people – more logic, rules, and symbols people – make direct reference to philosophers. You know, the Russell and Norvig textbook, which is, you know, as close as we have to a Bible in AI: you go to the first chapter, and five pages in, yeah, they’re talking about Aristotle. You know, and they say, like, this is basically an algorithm that you can read out of Aristotle and code up in, you know, uh, one of your programming languages, and build a little rational agent model. Uh, and, you know, in the AlphaGo paper – you know, I don’t know if I’m jumping ahead, but probably everybody’s already heard about AlphaGo.

Cameron    00:09:19    Uh, the first sentence of the abstract – or AlphaZero, I guess it was – you know, they’re saying we do this with a tabula rasa algorithm, right? That’s a direct reference to, you know, not just Locke, but the whole empiricist tradition back to Aristotle. And if you look at the skeptics, you know, Gary Marcus loves to, uh, invoke the empiricist-rationalist debate and frequently invokes Locke – and, you know, not, I think, the most charitable interpretation of Locke – but he is always pointing to, you know, this is just the kind of continuation of what Locke was doing. Or when Judea Pearl is talking about the need to build causal models into computational systems, you know, he’s saying the radical empiricist perspective that he reads out of machine learning today is kind of a continuation of Humean skepticism about causality.

Cameron    00:10:10    So I think it’s already there all over the place in the computer science. Uh, you know, your point is taken that, you know, going from those quick references to the original source materials, and then trying to charitably interpret what those authors were really trying to say, is where I think some of the speed bumps come in. But, uh, you know, that’s one of the things I try to do with my own work, is point to some of the secondary literature that I think is interpreting that, uh, original source material in the most useful way to connect it up to, uh, the current, uh, challenges in machine learning. So I hope that helps a bit.

Paul    00:10:48    Uh, you’re one of the first, um, of a handful of philosophically minded... I don’t wanna just pin you as a philosopher, because you’re more than that. So, you know, I was about to call you one of the first philosophers – you’re more than that – but anyway, of a handful of people who are connecting modern AI, and the lessons that we’re learning from modern AI, to a rich philosophical tradition like empiricism, um, which we’ll talk about. Um, and you often say, you know, it’s surprising that more philosophers haven’t started grappling with the progress in AI and the ideas and technologies that are coming out. Yeah. How do you think AI is shaping or affecting, you know, modern philosophy of mind, just in a, um, grand, you know, high-level view? Is it changing things? Because a lot of what you do is say, look, here’s this, uh, autoencoder, and this is how – oh, I forget at the moment, you know – this is how this ancient philosopher used to talk about these faculties, uh, that we have mentally, and you connect those two. But is AI, you know, transforming philosophy at all as well?

Cameron    00:12:09    I mean, that’s my hope. Um, in one sense, it’s changing things; in another sense, a lot of the big pieces are staying the same. But one way that I think that it can, uh, influence the debate in philosophy for the better is in empiricist philosophy of mind. For example, there’s a number of places where, uh, Locke or Hume or, you know, William James – somebody – says, well, the mind can do this thing, and it’s vitally important to my story about how we learn abstractions from experience or make rational inferences on an empiricist framework that the mind be able to do this thing, and I don’t have the slightest idea how it does it. <laugh> Right. You know, Hume in particular is sort of admirable in his modesty here, where again and again he’ll say, you know, he’ll give you the evidence.

Cameron    00:12:59    It’s sort of like he’s laying out a kind of empirical case: look, we do this, we do this, we do this, so clearly the mind can do this type of operation, but I don’t have any idea how. And he even despairs that we’ll ever be able to, uh, explain certain things. The faculty of imagination, for example, is a prime, uh, place where Hume says, you know, the mind can compose novel concepts and dream up these fantastical situations that we’ve never seen in the past, and I just despair that we’ll ever be able to understand how it does it. And his rationalist critics have repeatedly just savaged him <laugh> on these points, you know, saying, like, it looks like he’s just assuming some solution to a problem to help plug a hole in his view, uh, without really having any way to explain it.

Cameron    00:13:49    And they diagnose a deeper problem there. So one of the places where I think the recent machine learning can really inform the philosophical debate is by filling in some of these ellipses, uh, you know, getting rid of this whiff of magic, uh, to some of these key appeals that the empiricists have made in the past, and showing how, you know, vaguely brain-inspired computational mechanisms could actually do some of these operations. And I think that that’s the sense of wonder that a lot of people get when they look at the recent products of, like, uh, DALL-E 2, for example. You know, if you’ve been on Twitter anywhere, you can’t get away from this stuff, where you just give it a simple prompt, and there’s no way it saw anything exactly like that prompt in its training set, and it can paint you this beautiful picture of what that would look like.

Cameron    00:14:37    You know, and it’s the same sort of, uh, fantastical, uh, imagery that Hume was talking about in the Treatise. And the difference is, you know, we now have a computational mechanism that we can scrutinize and try to understand how it works. We’re still not completely there yet. And that’s one of the places where I think, you know, philosophy can then in turn, uh, provide some guidance back to, uh, machine learning, in helping figure out what are the right types of questions and what are the kinds of understanding that we want to have about how DALL-E 2, for example, is able to fuse, you know, a relatively scant text prompt using its latent space representations, and then craft such beautiful output that seems so immediately, uh, coherent and plausible to us. How is it able to do that?

Cameron    00:15:28    And what kind of understanding do we want to have of systems like that? Cause they’re vastly complicated, you know, billions and billions of parameters and huge training sets. Uh, that’s one place where I hope that philosophy can then come back and suggest some ideas about what questions we should ask of these systems and what types of understanding we should hope to have of them and, and how it all fits together in a more coherent picture of how the rational mind might work. You know, instead of just trying to solve some little marketable, uh, trick, uh, you know, one particular task for, uh, the next benchmark or, uh, publication in the next machine learning conference, you know, what, what does this really add up to, uh, together with everything else that, uh, we’ve learned recently in machine learning, uh, about how human minds work or animal minds work, you know, how, how does it add up to a bigger picture? I think that’s a question that often, um, machine learning engineers, don’t, don’t get enough time to ask, um, themselves  

Paul    00:16:28    Is DALL-E 2 going to be the artist for your cover art on your book when it comes out?

Cameron    00:16:33    Yeah. I mean, we’ll see. We’ve gotta figure out the copyright issues for, uh, the results produced by DALL-E 2. But yeah, I mean, there are some open access ones. I long ago made one myself with some of the earlier versions of, uh, Taming Transformers or whatever that I really liked, and I’ve had some people make some mockup art for me in DALL-E 2. <laugh> What remains is a negotiation between me, the publishers, and OpenAI.

Paul    00:17:01    One of the things that you – that I guess we’re gonna talk a lot about, that you have pointed to – is this longstanding and continued dichotomy between empiricism and rationalism, right? And we’ll talk about how you, um, sort of dispel that dichotomy or, uh, pick a winner, I suppose. But could you just, um, for people who aren’t familiar, uh, can you just explain the dichotomy between empiricism and rationalism traditionally, and then, you know, how it continues to be a subject of much debate these days?

Cameron    00:17:36    Yeah. I mean, as you frame the question – like, I’m sure you know that it’s very difficult even to define those two terms and outline the scope of the debate. And it’s in large part because there have been, you know, at least four or five major incarnations of this debate over history. You could index it to Plato versus Aristotle. You could index it to Locke and Hume versus Descartes and Leibniz. You could index it to, you know, William James versus some of the early introspectionist psychologists. You could index it to the Vienna Circle versus the rest of philosophy. You could index it to, you know, rationalists in, uh, epistemology – theory of knowledge – today. So you have to pick what you think is the most important underlying thread. The way I typically start is: it’s a debate over the origins of human knowledge, where, you know, at first blush, the rationalists want to kind of unpack our innate mental endowment, and the empiricists think that most knowledge comes from sort of interpreting or deciphering experience.

Cameron    00:18:43    Right? So the question is where is, where is kind of like the structure from which human knowledge is built to be found? Is it, is it somehow latent within our minds or is it, uh, out there in the world, uh, for us to kind of uncover, you know, that works pretty well, but then of course you can, you can interpret that in the, the half dozen different, more specific debates that, that come to, uh, quite different, uh, things when, when the rubber meets the road, uh, in the particular incarnation that we have in AI today, I think you wouldn’t do too badly by mapping rationalist nativism to, uh, sort of classical AI or at least hybrid AI where you think you need at least some rules and symbols to be manually encoded from the beginning, uh, before, you know, genuinely human, like learning can take place.  

Cameron    00:19:39    Whereas the empiricist side wants to either, you know, minimize or completely do away with any manual, uh, pre-specification of knowledge structures, uh, before learning begins. So the way I try to transmute it into a useful debate today is to say that the empiricist is trying to derive all the domain-specific abstractions – so that could be things like, you know, shapes or numbers or causal relations or, uh, biological categories or kinship, or, you know, whatever, all the things that rationalists think might be innate in the mind. And the rationalist thinks, no, you need to build at least some of that structure in manually from the beginning, uh, maybe like an intuitive theory of causality, or an intuitive theory of, uh, what objects are, or maybe even some beefier, like, evolutionary psychology–inspired stuff like kin detection or facial recognition or so on. You think you need some sort of manual prewiring of knowledge structures. This

Paul    00:20:44    Is like the core knowledge domains, right? That list of domains.

Cameron    00:20:48    Yeah. So core knowledge – uh, Carey and Spelke’s – is sort of like the most popular, and it’s really kind of a really minimal, uh, rationalist nativism in historical context. You know, so if you look at, you know, Jerry Fodor in philosophy and cog sci 30 or 40 years ago, or you look at, uh, the middle incarnations of Chomsky in linguistics – you know, Fodor notoriously says, like, every single simple concept in the mind is innate (okay, he means something kind of specific by that), or Chomsky thinks, you know, you have, uh, hundreds of rules and parameters that are innately specified in the universal grammar that then get sort of tuned to your particular language. Uh, core knowledge wants just, you know, a few pretty general concepts, uh, to be innately pre-specified, like object and agent and, uh, number.

Cameron    00:21:40    So, you know, the way I interpret the history, it’s sort of like the empiricists have already mostly won the field – we’ve negotiated the nativists down from sort of Platonic or Fodorian, uh, multitudes to just a handful. And I still think that debate over the handful is really worth having and empirically very fascinating to arbitrate, and really worth getting into, uh, the details of the experiments that are done in, uh, developmental psychology. Also, um, you know, the way that I recommend machine learning researchers kind of enter into the debate is, it’s sort of like a contest where we can transmute this philosophical question that goes all the way back to Plato and Aristotle into kind of an empirical contest, right? Where you just build some systems according to rationalist principles – maybe hybrid systems of the sort that, like, Gary Marcus recommends – and you build some systems according to empiricist principles, uh, of the sort that, you know, like Bengio or LeCun recommend, and you just see which ones are more successful or human-like.

Cameron    00:22:37    Right. But – and this is, like, the fundamental reason I wanted to write this book – I think that the rationalists are, uh, misinterpreting what the empiricists should be allowed to, uh, build into their systems from the beginning. And I think that, you know, to understand how this, uh, mischaracterization started, you know, I think you have to go back to the beginning of cognitive science, um, which has been an interdisciplinary field with a bit of an identity crisis, you know, since the beginning. So anytime you try to bring, you know, uh, computer science and philosophy and linguistics and biology and all these different fields together, uh, you’re always gonna face this question: like, what are we doing such that we should all be talking to each other and trying to, uh, be engaged in this common intellectual endeavor all the time? And you always get this bedtime story about, uh, Chomsky’s, uh, review of Skinner’s Verbal Behavior, right?

Cameron    00:23:34    And how, uh, the reason we needed the cognitive revolution was that, uh, behaviorism – uh, which was, you know, empiricism and association and all the stuff that, you know, at least by label, I champion now – uh, was so terribly wrong and limiting and so on that we needed to, uh, just completely break with it and have a paradigm shift to a new way of thinking. And then the rationalists today – like Gary Marcus, you know, he does this, and Steven Pinker does this, and Jerry Fodor loved doing this – you know, all the critics of deep learning almost love to then paint, uh, deep learning as being the contemporary reincarnation of psychological behaviorism. And this is just totally wrong <laugh>. And so what I really try to do in the book is articulate exactly why that’s not the way to set up this contest.

Cameron    00:24:30    I think this contest between the empiricist bots and the rationalist bots is exactly the contest we should be having today, but it is absolutely not the reincarnation of, you know, B.F. Skinner or, uh, Watson, uh, versus Noam Chomsky in the sixties. Um, you know, and what they wanna say is, it’s just, you know, statistics, pattern matching, simple association on sort of computational steroids, right? It’s just a couple simple principles of association, like classical and operant conditioning, and then you juice it up with huge data sets and massive amounts of computation, and that’s all deep learning is. And what I try to do in the book is go through, uh, dozens and dozens or hundreds of models and show how they all do something substantially more than that. And in fact, what they do is exactly the kinds of things that Locke or Hume, or, uh, William James, or Ibn Sina –

Cameron    00:25:28    I talk about Sophie de Grouchy, Adam Smith, all these other empiricists – had these wonderful ideas, where they had similar ellipses, like the one that Hume had with the imagination. And you can see how the structure that’s built into these deep learning models is really quite similar to the more ambitious ideas that they had, ideas that go beyond the simple operant and classical conditioning that was in the behaviorist toolkit. So the question, you know, then becomes: okay, so what are the constraints on what the empiricists can appeal to? Um, and I think, you know, there have been some rationalist, nativist philosophers – in particular Laurence and Margolis, who’ve written some nice papers about this – who similarly complained, from the nativist side, about setting up the terms of the debate in this way, because then there just isn’t anybody who’s an empiricist, and maybe there never was, right?

Cameron    00:26:18    Um, if the only innate mechanisms you’re allowed to appeal to in your attempt to explain how the mind works are operant and classical conditioning, then, you know, not even the classical behaviorists believed that, cuz even they needed, you know, uh, attention, and they needed basic drives, and they needed all kinds of other innate stuff that was relevant to get, uh, behaviorist learning off the ground. Um, Locke and Hume and the others – and deep learning today – also appeal to, uh, a variety of other domain-general procedures that I canvass under the term of faculties, like memory and attention and imagination. And, uh, there’s just no way to make sense of what they were trying to do without granting them these appeals to the faculties that they make in their works.

Paul    00:27:10    What is a faculty, though? Cause I’m grappling with this term – whether a faculty is a potential, or it is the... you know, because you could set up a system so that it has some inductive bias; would you call that a faculty? Or, you know, like, how big is the umbrella of faculty? Sorry to interrupt.

Cameron    00:27:29    Yeah, no, no, it’s a good question. It’s a question on which a lot turns, and which has, uh, been arbitrated differently, let’s say, throughout the centuries. So at least the interpreters of Aristotle – so Aristotle was a faculty psychologist, right? He had the same standard set of faculties that we do, you know: perception, attention, memory, imagination, and so on. So this is something that’s very deep in human thought. Um, and the Scholastics that were interpreting Aristotle, at least, they had a kind of dispositional, or what we might now call, uh, functional, way of interpreting the faculties, where they define it in terms of, like, a dispositional power to do a certain type of thing, right? Uh, and then anytime you wanted to explain, you know, how is it that I form some mental imagery, you would just say, oh, well, you have a faculty that has the power to do that, right?

Cameron    00:28:22    This is consistent with the kind of form of functional explanation that, uh, you know, got made fun of a lot in early modern philosophy, where you, like, explain why it is that opium makes you sleepy because of its dormitive properties, right? This was just another example of that. And Locke makes almost that same argument against the Scholastic approach to the faculties in his work. Hume, I think, takes a different approach – I think Locke does too, but Hume makes it even more explicit. And you know, some of the secondary literature that, uh, I was mentioning earlier, that would be really useful for people to dig into here, is by a guy named Tamás Demeter, um, who’s written a couple of articles rebutting Fodor’s interpretation of Hume, in particular on the faculties, like the faculty of the imagination. And he suggests that, um, the way that Hume thinks about the faculties is not as some kind of dispositional definition, where you just have this thing that has the power to do this.

Cameron    00:29:25    He treats it rather like a kind of empirical hypothesis, where you define, like, a sort of cluster of, uh, effects. You know, so you say, look, the human mind can do this and it can do this and it can do this – these three or four things – and they all seem kind of related to one another. So these are things like fusing together different concepts to make a novel concept – like a unicorn, you know, is a horse with a horn coming outta his head – or the ability to form mental imagery, or the ability to gauge the relative probability of two events by forming, uh, images in your head about how those two scenarios would play out. You might say these things all sort of seem to have something in common with one another. Um, let’s call the thing that does that the imagination.

Cameron    00:30:12    Now, I’m gonna be absolutely explicit that I haven’t explained how the imagination does that, but we do have something that can do those things, and they seem to roughly cohere together. And then I’ll try to characterize what that faculty is like. And Hume says some things, like it seeks novelty, and it seems to want to, like, complete pictures. You know, he has a variety of great kind of sketches around the edges of what the imagination is like, but he never defines it, and he never says he’s explaining how it works. That’s where I think, you know, it’s really interesting to plug in some of these more recent neural network architectures that can actually do some of these jobs – uh, like generative adversarial networks or generative transformers – and say, you know, look, here’s a physical system that does not have any innate knowledge built into it of the sort that the rationalists said you would need to have.

Cameron    00:31:03    And it looks like it can do some of the jobs that Hume says the imagination could do. Now, I don’t go so far as to say, like, okay, so now we’ve created a system that has a real imagination, right? I approach it more from a standard kind of modeling perspective in philosophy of science, to say, no, of course this is a partial model of the imagination – uh, but maybe it’s modeling some critical, like, difference-making aspects of the mind or brain that allow us to do these things in our head. And in that sense, it can teach us, uh, some of the key bits that Hume, you know, based on the neuroscience or mathematics of his day, couldn’t possibly explain. It can fill in some of those critical ellipses, and that then strengthens the whole empiricist package when they try to explain rational cognition.

Cameron    00:31:46    So, you know, one of the ways that Tamás Demeter puts it is that, um, Hume is often interpreted as wanting to be like the Newton of the mind – in other words, trying to do a kind of physics of the mind, where you have these ideas that are bouncing off against one another like billiard balls, um, and kind of affecting one another like physical particles. And he says, no, that’s not quite right. In fact, Hume makes, uh, a number of appeals in the Treatise and Enquiry to other sciences, uh, more biological sciences like anatomy, or more chemical sciences, where the fusion of two things can be more than the sum of its parts. And Hume is better interpreted, you know, when talking about the faculties, as doing the same type of thing that, like, an anatomist does when they try to theorize about, you know, how a liver works or how a heart works: you need to have something that does this job, and you kind of sketch its position in a kind of architecture of other mental organs. And then you can start trying to theorize about how it actually works from a kind of mechanistic perspective, from a variety of different perspectives – using behavioral work, maybe using modeling work, maybe integrating some neuroscientific findings into the story. Um, and so I think that’s the better way to approach the faculties in these cases.

Paul    00:33:11    Uh, you used – faculty psychology, I think, is the phrase. Yeah. And, you know, something like imagination – um, you know, there are people these days who think that our folk psychological concepts are, uh, incorrect or outdated, or we need new terms, because they don’t map onto, you know, the quote-unquote mechanisms, uh, in the brain. So when you use a term like imagination, is that in the psychological domain, and then we can look at mechanisms and keep them separate? Or is this a way to fuse the psychology with, you know, a more, uh, biological or computational, uh, mechanistic, implementation-level, uh, account, right, of how these things play out? In other words, you know, are we fine just using a separate language for the mental, psychological constructs that are these faculties, uh, or can we fuse them? You know, can we bridge that divide, yeah, with this approach?

Cameron    00:34:14    I tend to be a mechanist in these types of disputes, but then some of the more subtle philosophy of modeling of the last 10 years has been about abstract mechanistic explanations. You know, you’ve had Catherine Stinson on – that’s one of the places where I draw on her work. Uh, also Piccinini and a few others have been writing recently about how, uh, mechanistic explanations for the brain don’t have to be pitched at the most specific grain of detail, and they can be quite general. And faculties, you know, are almost as general as they get. A lot of my previous work was actually cast at an even higher level of abstraction, about cognition itself, you know. And so I wrote about what kind of mechanism could implement cognition. The faculties are a little bit more specific than that, but it’s not like, you know, the imagination is gonna map to a particular brain region.

Cameron    00:35:06    That’s thinking, uh, much too simplistically about the relationship between, uh, computational models and how the brain works. I do think that we should try to draw as close a relationship as we can between the architecture and other structural details of these neural network models and the brain’s operations. Uh, and you know, all other things being equal, uh, a model that has a tighter mechanistic, structural correspondence is to be preferred, but those tighter correspondences are often traded off against other things. So, you know, for example – this is a problem that, you know, fMRI people have; forget about the computational models – my brain differs from your brain, right? So, you know, if I’m trying to interpret fMRI, there’s gonna be some slight differences even between where certain, uh, functional capacities are localized, in the correlates, between my brain and your brain. So, you know, even just thinking about humans – not even going to other species or to artificial, uh, agents – you can’t go too specific.

Cameron    00:36:15    I do think that there’s a nice story that can be told at this point about the deep convolutional neural networks, and I try to do that in my Synthese article – to say that we might have actually located, let’s put it, the generic mechanism, in Stinson’s terms (if anybody wants to go back and listen to that, um, podcast too), uh, that is shared between a deep convolutional neural network that’s instantiated in a computer and, uh, the ventral stream processing in the primate brain. Uh, so this is a story that, like, James DiCarlo, uh, and his lab have told in a number of different publications, where the idea is that, uh, you know, going all the way back to Hubel and Wiesel, you have this alternation in the ventral stream between what they called simple cells and complex cells, right?

Cameron    00:37:09    Where they say the simple cells are responding to a particular feature in a particular orientation, and then the complex cells take, uh, input from multiple simple cells that are detecting the same feature but in slightly different orientations, or maybe in a slightly different location – uh, and they fire just if one of their inputs fires above a certain threshold, let’s say, right? So the simple cells have what, you know, we call a linear activation pattern, and the complex cells have what we call a nonlinear activation pattern. And then the theory is that there’s lots of kind of layering of these simple and complex cells, uh, throughout the ventral stream. And this is in fact the neuroscientific hypothesis that inspired some of the very first deep convolutional neural networks, all the way back to Fukushima in the eighties, right – he was directly inspired by this neuroscientific work – um, the Neocognitron, right, which, you know, didn’t get as much attention maybe as it should have back in the day, because we didn’t really know how to train such networks effectively. But, right, the basic insights of the deep convolutional neural network were largely there.

Cameron    00:38:16    Um, and the thought is, what you’re looking for is a kind of abstract mechanism that could be instantiated in very different types of substrates, but that captures the sort of key difference makers that allow a cognitive system to solve a certain type of cognitive problem – like object recognition, for example. And the way the story goes, right, is that by, uh, passing the input from many of these linear feature detectors to a nonlinear aggregator, and then stacking lots of those sandwiches of linear and nonlinear, um, detectors on top of one another in a deep hierarchy, you sort of iteratively make the detection of objects more and more resistant to what the machine learning and vision researchers call nuisance variation, right? Which is what makes object recognition hard, and really is what made all the classical, good old-fashioned AI computer vision models fail – because there’s just too much nuisance variation in the input; you can’t explicitly anticipate it all and program for it all using manual rules and symbols.

Cameron    00:39:34    So the computational benefits that you get from one of these deep linear–nonlinear stacked hierarchies, and the ability to train them gradually over time, is what allows, the thought goes, both the brain and deep convolutional neural networks like AlexNet and its later inheritors to solve these, uh, visual recognition problems so much more effectively than all previous methods. And to me, that’s a mechanistic explanation. It’s just a very abstract one, right? And it’s one that’s at a level of abstraction that could be shared between humans and monkeys and artificial neural networks built in a variety of different substrates. And that’s the sweet spot, right? That’s the sweet spot that you want between making specific empirical predictions, but also being simple and abstract enough that you feel like you’re really understanding some kinds of deep principles about how this works.
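[Editor's note: for readers who want the shape of this idea in code, here is a minimal, illustrative sketch in Python/PyTorch of the stacked linear/nonlinear "sandwich" hierarchy described above: convolutions standing in for simple cells, ReLU and max pooling standing in for complex cells. The layer sizes, names, and input shape are hypothetical, not taken from any model discussed in the episode.]

# A minimal, illustrative sketch (not any specific published model) of the
# linear/nonlinear "sandwich" structure: linear feature detectors (convolutions)
# followed by nonlinear aggregation (ReLU + max pooling), stacked into a hierarchy.
import torch
import torch.nn as nn

def simple_complex_block(in_channels: int, out_channels: int) -> nn.Sequential:
    """One 'sandwich': linear convolution -> nonlinearity -> pooling over nearby positions."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),  # simple-cell-like linear detectors
        nn.ReLU(),                                                       # nonlinear activation
        nn.MaxPool2d(kernel_size=2),                                     # complex-cell-like aggregation
    )

# Stacking several blocks makes the learned features progressively more tolerant
# to "nuisance variation" (shifts, scale, lighting) in the input.
toy_ventral_stream = nn.Sequential(
    simple_complex_block(3, 16),
    simple_complex_block(16, 32),
    simple_complex_block(32, 64),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 10),   # readout over ten hypothetical object categories
)

scores = toy_ventral_stream(torch.randn(1, 3, 32, 32))  # e.g. one 32x32 RGB image
print(scores.shape)  # torch.Size([1, 10])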

Paul    00:40:29    But in the case of, um, Jim DiCarlo and Dan Yamins, and now many others, working on using CNNs to model the ventral visual stream, you know, they sort of limited the model to try to map it on as well as they could to the various hierarchical levels in the brain. And now there’s, I think it’s called Brain-Score, where you can see how much variance your model, um, explains with respect to neural activity that’s recorded. But then of course, the size of convolutional neural networks has just grown and grown and grown, to where, you know, not that it ever really resembled brains <laugh>, but it less resembles brains. Um, and thinking about, like – I don’t know if we’ve used the phrase domain-general faculty yet, but that’s what you were alluding to earlier, these domain-general faculties that these various things kind of fit into, right? You know, when you have a giant convolutional neural network, let’s say – I guess what I want to ask is, you know, would that still just be the same faculty at a larger scale, because it’s doing something superhuman, or would you potentially be building new faculties as you scale up, for instance? Does that make sense?

Cameron    00:41:45    Yeah. Gosh, I mean, that’s a really interesting question. I haven’t thought about entirely new faculties yet. Um, I’m already sort of committed to the idea that, um, large enough differences in sort of computational scale or power can add up to qualitatively different psychological functioning, cuz I think that’s really part of the case that I make about what’s different between, uh, contemporary deep convolutional neural networks and the simple three-layer networks that you had back in the eighties and nineties. You know, this is a result that’s been proven a number of ways in computer science: you get exponential benefits in, uh, computational power by having a much deeper network, because you can recursively reuse some computation that’s performed by an earlier node, uh, relative to the depth of the network. Uh, so you can now solve some... you know, you can say, like, well, in principle, you know, a three-layer neural network could approximate any function.

Cameron    00:42:42    Yeah, sure – if you had, like, you know, computation for billions of years and you had the number of processors that there are atoms in the universe. It’s not really worth talking about. But now we actually have physical mechanisms that can actually do these problems. Okay, maybe a lot of ’em require data sets that might be a bit larger than the ones that, say, a human child’s been exposed to – that’s actually a point that I, you know, think is worth a lot of, uh, interpretive work as well. Um, but they can do it, you know, in a practical time period, in a way that, uh, some of the earlier networks can’t, and that’s a direct result of them having this kind of generic, um, mechanism that we were talking about just a second ago. Um, you’re right that the one that, say, DiCarlo’s lab built was kind of constrained to be shallower, and maybe more biologically plausible in its size, than a lot of the ones that people are using for state-of-the-art applications today.

Cameron    00:43:36    Uh, and I think there are very interesting and deep philosophical questions here. This is one of the questions I raise in my Nature Machine Intelligence paper about the problems of induction; it’s more directed at the possibility that these systems are discovering real features that are kind of, like, necessarily beyond human ken, right? I think this is kind of vividly raised by, uh, AlphaFold and AlphaFold 2, right? Where you say there are these microbiologists who’ve devoted their lives to predicting protein folds, and then, you know, on its first shot at the task, AlphaFold just blows them out of the water. How does it do that? And you can talk about, you know, the Levinthal paradox and the complexity of an amino acid chain and how many different possible, uh, degrees of freedom there are for it to fold, and say, like, there’s just no way that it could see some features that are letting it make these predictions.

Cameron    00:44:34    But what if it can? <laugh> Like, you know, what if that system looks at this very complex structure and, just because it’s so much larger, uh, in its deep hierarchy, let’s say, than brains plausibly could be, it sees some feature there? And that feature has, let’s say, all of the nice properties of being robust, uh, to differences in background conditions, and being maybe even causally manipulable to bring about certain effects – suppose it has, you know, all the special properties, or whatever you want, of a real scientific property. What do we say in that case? You know, are there real properties out there that are necessarily beyond human intelligence? And is, like, the frontier of science now gonna be defined by us relying on machines to see properties that we sort of, like, necessarily can’t understand? You know, we rely on, you know, like the Webb telescope now to see things very far away, but we can understand the properties that we’re seeing – whereas these might be properties that are just, like, necessarily beyond our comprehension.

Cameron    00:45:39    What should we do about that, um, as scientists or philosophers of science? Um, entirely new faculties – I don’t know. I don’t even know how we would grapple with entirely new faculties. New properties are bad enough, from my perspective. Uh, but it could be, you know, that they just start, uh, outthinking us in some way that we can’t even figure out the way in which they’re outthinking us. And then we might really be in trouble. <laugh> You’re gonna turn me into a singularity theorist. I’m not a singularity theorist, but you’re gonna turn me into one.

Paul    00:46:11    Well, right. But I mean, so a lot of what you focus on, you know, thinking about these domain general faculties is, you know, one of the questions I was gonna ask you is, well, how many do we need, right. Yeah. Is it a handful? Yeah. Is it a bunch? And, but, but that is geared very much toward human intelligence. Yeah. Yeah. And we all know human intelligence is the highest intelligence ever in the universe, but,  

Cameron    00:46:32    Well, that’s actually – the way I like to look at it, for both humans and machines, and this is the way I kind of present it in the book, is that the faculties are solutions to computational problems, right? So, um, nuisance variation is a computational problem, and perception is a faculty that’s a solution to that problem. You ask, then, you know, what does perception let you do? And then you do this thing I was talking about – thinking about the edges of a mental organ: what is its role in the cognitive economy? And then you start to theorize about the internal structure of that faculty, how it does the job that it does. Um, imagination is a solution to a computational problem, right? You train up a deep convolutional neural network, but it has a hard time generalizing to out-of-distribution data, right?

Cameron    00:47:20    This is a just standard problem that all machine learning researchers worry about. You say, like, that’s great, but how is it gonna create, you know, novel representations or deal with novel situations? You say, well, what if you had a generative system that could recombine its previous experience in flexible ways to, you know, predict what would happen in different types of situations, or what different types of combinations that weren’t explicitly in its training set might look like? That’s a solution to a computational problem. Memory is one of the best examples, right? There’s a very classic paper by McClelland, Rumelhart, O’Reilly, right – or is it McClelland, McNaughton, O’Reilly? I don’t wanna get the citation wrong. Yeah, the ’95 paper, anyway, on complementary learning systems. I just wanna say the phrase catastrophic interference – or catastrophic forgetting, it sometimes gets called, right?

Cameron    00:48:08    But there’s this problem with all neural networks, that they have a tendency, when you train them on a new problem, to overwrite their previously learned adaptive solutions to other problems, especially if there’s some kind of thematic overlap between the two subject materials. That’s a fundamental computational problem with any system that learns the way that brains or artificial neural networks do, right? And memory is proposed as a solution to that problem: by having different memory systems, one that has a faster learning rate and one that has a slower learning rate, and doing kind of slow, interleaved memory consolidation over longer periods of time. You know, so that’s the way I look at all of the faculties in the book. You know, I could tell the same story about attention, or about empathy and social cognition. They’re all solutions to computational problems that are gonna be faced by both humans and by robots.
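[Editor's note: here is a toy Python/PyTorch sketch of the replay idea behind complementary learning systems, for the curious. A fast episodic store keeps raw experiences, and a slow learner is trained on new examples interleaved with replayed old ones, which is the basic move against catastrophic forgetting. The names, sizes, and toy tasks are hypothetical; this is not the 1995 model itself.]

# Toy sketch of the complementary-learning-systems idea: a fast episodic buffer
# stores new experiences, and the slow-learning network trains on new examples
# interleaved with replayed old ones, mitigating catastrophic forgetting.
import random
import torch
import torch.nn as nn

cortex = nn.Linear(10, 2)                                   # slow learner (stands in for neocortex)
optimizer = torch.optim.SGD(cortex.parameters(), lr=0.01)   # small, slow learning rate
loss_fn = nn.MSELoss()
hippocampus: list[tuple[torch.Tensor, torch.Tensor]] = []   # fast store: just keeps raw episodes

def observe(x: torch.Tensor, y: torch.Tensor, replay_size: int = 8) -> None:
    """Store the new episode, then train the slow learner on it interleaved with replayed old ones."""
    hippocampus.append((x, y))
    batch = [(x, y)] + random.sample(hippocampus, min(replay_size, len(hippocampus)))
    xs = torch.stack([bx for bx, _ in batch])
    ys = torch.stack([by for _, by in batch])
    optimizer.zero_grad()
    loss = loss_fn(cortex(xs), ys)
    loss.backward()
    optimizer.step()

# Usage: stream episodes from an earlier task, then a later one; replay keeps
# the earlier task from being overwritten by the later training.
for _ in range(100):
    observe(torch.randn(10) + 1.0, torch.tensor([1.0, 0.0]))   # episodes from an earlier task
for _ in range(100):
    observe(torch.randn(10) - 1.0, torch.tensor([0.0, 1.0]))   # episodes from a later task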

Cameron    00:49:02    And if you look at it that way, it’s really, you know, the same story. Um, and from my perspective, it really provides another form of confirmatory evidence that neural-network-type approaches to the mind are the right way to think about things, right? When you find that the brain has developed some kind of faculty or system, whatever you wanna call it, that solves this computational problem – one that our artificial neural networks that lack this system, uh, struggle with – that suggests, to me, that we’re not only on the right track by modeling the mind’s operations with artificial neural networks, but that we’re also on the right track by trying to add more faculties and faculty-like processing to our deep neural network architectures. And I think this is really something that’s already well underway. And I think it’s a perspective that’s certainly, uh, very friendly and familiar to the way that DeepMind works.

Cameron    00:50:00    Um, and a lot of other researchers that have come from psychology and neuroscience that are now in deep learning, I think, share this type of perspective. So, you know, I think one of the things I do in the book is – you know, you start from this behaviorist caricature, it’s all just classical and operant conditioning on computational steroids, and you can show them, no, look, here’s like five models from DeepMind, or, you know, some other modelers, where they explicitly say: we were inspired to add this thing to our architecture by the way that memory works in, um, mammalian brains, to solve this specific computational problem. Or, you know, you can look at dozens of models that say, look, we’re adding an attentional mechanism to solve this problem – you know, the fundamental, uh, innovation of transformers being a particular type of attention – we’re adding this to solve this very particular computational problem, in a way that’s, you know, vaguely inspired by the role that this faculty plays in this type of processing in human brains. And so that’s why I use the faculties as a kind of narrative thread, to, uh, try to raise familiarity and awareness with a much wider range of neural network architectures than typically get invoked in these types of flashpoint debates.

Paul    00:51:16    But you don’t have an exhaustive list of the needed domain-general faculties. One of the things that you appeal to, though, is – in the days of old, which doesn’t happen much anymore, and you kind of put a call out that this is what AI researchers should be focusing more on, and some are – cognitive architectures. Yeah. People like, you know, Chris Eliasmith, Randy O’Reilly, they’re trying to, you know, build these things. And you think the time is right now to start taking these modules and putting them together, right, and figuring out how they can work together to do more

Cameron    00:51:51    Right.  

Paul    00:51:52    Intelligent things.  

Cameron    00:51:53    Right. Exactly. Yeah. So there – I say in the book, there’s been a lot of faculty models, but they’ve mostly been one-offs, right? So they just say, like, we’re adding a memory store to this model, and look at what cool things it can do; or we’re adding some imagination-like component, look at the cool stuff it can do; we’re adding an attentional mechanism, look at the cool stuff it can do. But you don’t yet see somebody trying to release a full deep learning cognitive architecture, the way that you’ve seen, as you mentioned, with, uh, older, previous incarnations of, um, cognitive architectures – like, yeah, those are the examples I talked about before. Um, and I think the time is definitely right for that, but this is another place where I think it would be really good to look at the history of empiricist philosophy.

Cameron    00:52:39    Uh, and this is not, you know, a novel idea I came up with, but the thought is that as they try to combine more and more semi-independent modules in a coherent deep learning cognitive architecture, they’re gonna face more control problems and coordination problems, right? Where, you know, the memory module and the imagination module might conflict, or the attention module might, uh, have multiple things sort of vying for its, uh, processing. And these are problems that I think it would be best to be proactive about, um, at this point, where we’re just now starting to build, you know, fully deep learning cognitive architectures – not sort of hybrid, but really, like, deep learning through and through. How are we gonna solve these control and coordination problems before they really become unmanageable? And this type of problem is something that all of the empiricist philosophers worried about sort of exhaustively in their famous works: they worried endlessly about confusing the deliverances of perception and imagination and memory.

Cameron    00:53:41    And they had detailed, almost processing-level stories about how they thought you should solve that problem, in terms of properties of the mental representation like vivacity and vibrancy. They don't give you an algorithm exactly, but they give you really rich ideas. I was just in Scotland, and the way I'd like the neural network modelers to read the philosophy is the way a Scottish landscape painter would go to the Highlands to take in the scenery and get inspired. You get the big picture. It's not going to tell you how to write the code, in the same way that looking at the beautiful view doesn't tell you how to paint the picture, but it gives you some ideas about what to try to paint and why it's valuable to paint that thing. And I'm hoping that reading what, say, Hume or Locke had to say about these control problems, where the faculties might clash with one another, would provide that kind of inspiration to develop the algorithms and computational mechanisms that might actually solve the problems.

Paul    00:54:50    Do you think that's going to be harder than what's being done in deep learning these days, the control and coordination between modules? There are even trade-offs between model-based and model-free reinforcement learning in our brains, and there's work on which one takes over, whether they're in competition or complementary, and that's just within a very narrow domain. So then you have these other, somewhat distinct, domain-general faculties that need to coordinate. It seems like a different kind of problem than, you know, learning weights, but I don't know that it is.

Cameron    00:55:27    Yeah. That's certainly one way I think it's worth approaching the problem: think of it as just another kind of meta reinforcement learning problem, where, and this is the way that, for example, Matthew Botvinick likes to think about it, you've got another reinforcement learning system doing something like executive control, and you train it to select between different policies and different sub-networks. I haven't seen a model quite yet where they're using that approach to arbitrate between, say, memory storage, visual imagery from an imagination module, or generating internal monologue from a verbal transformer, but that's the kind of work I would like to see tried. And maybe it is just another reinforcement learning problem, I don't know. But maybe it's not. Maybe you need to think about some other dimensions of the problem that are described by some of the empiricist philosophers.
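
To make that executive-control idea concrete, here is a minimal sketch, assuming a simple bandit-style arbiter rather than any published meta-reinforcement-learning architecture; the module names, reward signals, and update rule are illustrative stand-ins, not anything from the episode or the book.

```python
# A minimal sketch (not Buckner's or Botvinick's model): a bandit-style
# "executive" that learns which faculty module to hand control to each step.
import random

class ExecutiveController:
    def __init__(self, modules, epsilon=0.1, lr=0.1):
        self.modules = modules                          # name -> callable(context) -> (output, reward)
        self.values = {name: 0.0 for name in modules}   # learned usefulness of each module
        self.epsilon = epsilon
        self.lr = lr

    def step(self, context):
        # Epsilon-greedy arbitration between sub-networks / policies.
        if random.random() < self.epsilon:
            name = random.choice(list(self.modules))
        else:
            name = max(self.values, key=self.values.get)
        output, reward = self.modules[name](context)
        # Update the controller's estimate of how useful this module is.
        self.values[name] += self.lr * (reward - self.values[name])
        return name, output

# Toy stand-ins for faculty modules; in a real system these would be a memory
# store, a generative "imagination" model, a language model, and so on.
modules = {
    "memory":      lambda ctx: ("recalled item", random.gauss(0.5, 0.1)),
    "imagination": lambda ctx: ("simulated outcome", random.gauss(0.3, 0.2)),
    "language":    lambda ctx: ("inner monologue", random.gauss(0.4, 0.1)),
}

controller = ExecutiveController(modules)
for t in range(200):
    controller.step(context=None)
print(controller.values)  # the controller drifts toward the most useful faculty
```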

Paul    00:56:28    Sorry to ask you a very specific question, because I know you don't have an exhaustive list of domain-general faculties, but you mentioned empathy, and we think of that as a really high cognitive capacity, a social one. Is language its own domain-general faculty? Where does language fit, or is it part of a social faculty?

Cameron    00:56:55    Just for narrative structure in the book, I treat it as part of attention.

Paul    00:57:00    Because of, well, transformers.

Cameron    00:57:03    Exactly right. And that's based partly on a lot of my background reading into people who treat language as a faculty: how do they think it works? Approach it from the perspective of what the uniquely human capacity for language is that's not shared with, say, Nim Chimpsky or the other animals they've tried to teach language to. The thought is that only we can learn grammatical structures in terms of recursive, tree-building syntactic structures somewhere in a particular place in the Chomsky hierarchy. And a lot of people think that what we have in, say, our left temporal areas that get associated with grammatical processing is a kind of pointer system that lets us store locations in a grammatical structure as the structure gets elaborated to be more and more complex, so that we can process particular bits of that structure recursively in the right order.

Cameron    00:58:07    You can understand that as a kind of attentional mechanism, maybe combined with a kind of memory mechanism, where the attentional mechanism is pointing to where you need to look in the hierarchy. So language could just be a particularly sophisticated form of attentional mechanism. I don't think the debates about exactly how to count the faculties are as important as the real debate between the rationalist nativists today and the empiricists, which is whether the empiricists should get to appeal to a bunch of other domain-general innate structure that does more than just a couple of simple learning rules. And the answer to that is obviously yes. The rest of the debates, about how many faculties there are, which ones there are, and which ones are actually distinct from others, are a family dispute that's totally worth empirically arbitrating. Another really interesting example of where that dispute comes up is whether memory and imagination are distinct.
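
As a toy gloss on that pointer idea, and only a gloss, a stack of saved positions can play the attention-plus-memory role, tracking where to resume as a recursively nested structure gets elaborated; the bracketed toy grammar below is an assumption for illustration.

```python
# A toy illustration of the "pointer system" idea: a stack of return positions
# acts like attention plus memory, tracking where to resume in a recursively
# elaborated structure.
def parse(tokens):
    stack = []          # memory: saved positions in the growing tree
    root = []
    node = root
    for tok in tokens:
        if tok == "(":
            child = []
            node.append(child)
            stack.append(node)   # remember where to return (pointer pushed)
            node = child         # attention moves down into the new constituent
        elif tok == ")":
            node = stack.pop()   # attention pops back to the enclosing constituent
        else:
            node.append(tok)
    return root

print(parse("( the ( old ( gray ) dog ) barked )".split()))
# [['the', ['old', ['gray'], 'dog'], 'barked']]
```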

Cameron    00:59:08    And that debate actually has a very long philosophical pedigree, going all the way back to Hobbes and Hume and a number of other philosophers who weighed in on whether imagination and memory are distinct, and Felipe De Brigard and a number of other philosophers have weighed in on it recently. The thought is that because memory recall seems to be more creative and reconstructive than the photocopy-replay picture of memory suggests, you might think memory is just a particular mode of the imagination or something like that. I have a particular take on that, where I think it's worth preserving the ends of the continuum, but there might be some interesting middle positions. The psychologist Neisser talked about a category he called repisodic memory. I don't know if your family celebrates Thanksgiving, but you might have a memory of a typical Thanksgiving dinner, or whatever holiday your family celebrates.

Cameron    01:00:17    It's not a particular event, the way episodic memory is supposed to be; it's slightly abstracted, maybe an amalgamation of ten Thanksgiving dinners, where somebody's got the game on and somebody's complaining about the turkey and certain things happen stereotypically. It sits somewhere in between a fully abstracted event category and a particular record of a particular occasion. And I think there are going to be lots of examples like that, where a mental representation is a kind of blend of, or cooperation between, two different faculties. But that doesn't mean that my memory of what I had for breakfast this morning, or what I did five minutes ago, is a purely imaginative act. There's a real difference between me recalling what I had for breakfast and me imagining what it would be like to ride a unicorn on the rooftop over there; those are still pretty distinct modes of my mind. And I think that's how a lot of these disputes about exactly how to count the faculties, and exactly where their borders are, should work: you have particular models of how each individual faculty operates when it's in its purest mode, and then you can understand a lot of the in-between cases as involving cooperation or coordination between the faculties.

Paul    01:01:42    We do celebrate Thanksgiving, but it's exactly the same every year: we sit around talking about how much we love each other, super thankful the whole time, and dinner always comes out perfectly.

Cameron    01:01:54    Nobody complains about the turkey.

Paul    01:01:56    No one complains about the turkey. So: the memory-imagination dichotomy, the empiricism-rationalism dichotomy. I wanted to ask you this and almost skipped over it: how do we go beyond dichotomous thinking? Maybe one of our domain-general faculties is dichotomous thinking.

Cameron    01:02:16    For sure.  

Paul    01:02:17    We like to set up binaries, A versus B. But historically, and with what you're doing with domain-general faculties, for instance, that seems to dissolve some of the dichotomy, and it seems fruitful. Is there a methodological approach you take to dissolving dichotomies, or is it just swimming in the waters of each camp and trying to understand what each side is really saying, and so on?

Cameron    01:02:49    I think it's just really taking the time to charitably interpret what each author thinks some of these big banner terms imply, and mapping them out on a conceptual map. It doesn't have to be a single-dimension continuum; sometimes you need two or three dimensions. I have a couple of figures like that in my book where I try to say, look, sure, these are all rationalists in some sense, but that just means they're on one side of this long continuum, and there are important within-party differences that are relevant to the dispute with the far side of the continuum. You want to say that core knowledge is much more empiricist than Fodor or mid-career Chomsky or Plato, in a totally meaningful sense, and LeCun might be less empiricist than certain bitter-lesson, hardcore, just-throw-more-computation-at-it machine learning theorists. These are important differences you want to recognize to avoid the kind of caricaturing and talking past each other that I'm trying to urge everybody to stop, so that we can have the most useful competition possible in this grand engineering moment.

Paul    01:04:08    Before we finish, if you have time, I want to get on to forward-looking content and teleosemantics. But before we leave domain-general faculties, I want to ask about two things. One: a lot of our brain's computational power, if everything is computation, is devoted to bodily processes, homeostasis, breathing, all these automatic processes our brain has to keep running. Is that something we even need to worry about when we're building artificial systems? Because those processes do interact with, and affect, our other domain-general faculties and cognitive modules. That's the first question. The second is about the spectrum: we talk about human-level AI and maybe superhuman AI, like protein folding, but given your background in animal cognition, should we be paying attention to the different kinds of domain-general faculties that non-human organisms might possess? Can we even discern what those faculties might be? Sorry, those were two different questions.

Cameron    01:05:26    Yeah, let me take the first one before I forget it, and you might have to remind me of the second. So the first question is whether machine learning researchers should worry about the more autonomic, lower-level processing that the brain does, which seems to occupy a lot of hardware space in the brain. Is that a fair enough construal?

Paul    01:05:49    Fair enough. Interoception, but also, yes.

Cameron    01:05:53    And the answer is yes, absolutely. And I think looking at the empiricists again is a great way to drive this point home. In particular, William James, often considered a kind of father of modern psychology, was very emphatic on this point: the autonomic and interoceptive aspects of cognition are vitally important to understanding how mentality works and how we get around, especially with emotional processing. So if there's one really weak point in recent deep learning achievements, I would say it's the ability to model emotional responses, or what Hume might have called sentiments. These emotional responses and sentiments play an enormously important role throughout empiricist theorizing about how the mind works, from the early moderns to today.

Cameron    01:06:54    They give us affective appraisals of how we should respond to situations, and all kinds of important information. One place where this would be really useful in deep learning is valuation functions. There are maybe several dark arts in deep learning today, but one of the trickiest, I think, is building a good valuation function for your reinforcement learning algorithm, and I think all of the modelers would be totally forthright about that. When you're dealing with a simple game-like environment, like Go or an Atari game, you can just use score or board control or whatever as a proxy for reward. But if you want to model rational agency more generally, it's hard. One of the points I make, and this ties back to some of the earlier points about anthropofabulation, which is assuming that the human mind is not vulnerable to some of the same problems these artificial agents are.

Cameron    01:08:03    We are terrible at this too, right? A lot of the time, when we're handed h-index or US News & World Report scores or social media likes, we use these easy quantitative proxies for value, and they're actually making us miserable and not leading us in the right direction. So again, this is a computational problem that both computational systems and human brains face. And I think listening to the body, in particular, as a rich multidimensional source of valuations, is one way we solve this problem, and we really don't know yet how to integrate it into deep learning. There are lots of people making specific versions of this proposal. Lucy Cheke, one of my hosts here at Cambridge, who is primarily an animal cognition and developmental cognition researcher, has suggested that reinforcement learning in deep learning today mostly focuses on a kind of wanting system that you might map to the dopaminergic system in the brain, but animals also have needing and liking systems, which are more about affective attachment and homeostatic survival needs.

Cameron    01:09:30    And those are different valuation systems that are always competing and playing off each other. So the thought might be that we need to build in more of these lower-level valuations, whether they're sentiments, or needing, liking, and wanting, or affective appraisals and emotional reactions, to succeed in a lot of the places where current deep learning agents, especially when they're released into more open-ended environments, presently fail pretty badly. And that's a place where I think we still need a kind of technological leap forward before we can make real progress. We seem to still be missing something.
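
One way to picture the valuation point is a minimal sketch in which several signals, rather than a single score-like proxy, feed an agent's overall value estimate; the wanting/needing/liking split follows the discussion above, but the particular state variables, functions, and weights are illustrative assumptions, not an established model.

```python
# A minimal sketch: weigh several valuation signals instead of one score proxy.
# The three systems and the weighting scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InternalState:
    hunger: float      # homeostatic deficit in [0, 1]
    hydration: float   # 1.0 = fully hydrated

def wanting(predicted_reward: float) -> float:
    # dopamine-like learned incentive value (e.g., from an RL critic)
    return predicted_reward

def needing(state: InternalState) -> float:
    # homeostatic urgency: value outcomes more when deficits are high
    return state.hunger + (1.0 - state.hydration)

def liking(hedonic_feedback: float) -> float:
    # immediate hedonic appraisal of the outcome itself
    return hedonic_feedback

def overall_value(predicted_reward, state, hedonic_feedback,
                  w_want=0.4, w_need=0.4, w_like=0.2):
    return (w_want * wanting(predicted_reward)
            + w_need * needing(state)
            + w_like * liking(hedonic_feedback))

state = InternalState(hunger=0.8, hydration=0.3)
print(overall_value(predicted_reward=0.2, state=state, hedonic_feedback=0.6))
```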

Paul    01:10:16    The danger of having its power turned off is always present for the machine, and it would have to balance that. Okay, so I'll remind you of the other question, which was about animal cognition, and I'll slightly rephrase it. I map my emotions and my thought processes onto my dog: my dog is sad, my dog feels fear. These are psychological constructs that we talk about and share through our own language and our own experiences as humans. And this goes back to building human-like or human-level AI: how much do we need to pay attention to the potential domain-general faculties of other, non-human organisms, whether or not we can put a name to them or say anything about what the actual experience is like for them?

Cameron    01:11:04    Are you thinking again about faculties that humans don't have but animals might have?

Paul    01:11:10    Not necessarily. Well, take imagination. Is imagination, as a domain-general faculty, ontologically sound? Is it a thing that actually exists in the universe that a squirrel also has or could have? Or does the structure of a squirrel's brain, the way its architectures interact and the neural activity between its modules interacts, develop a slightly different faculty, something that's not quite imagination or memory but might be something else? Does that make sense? It's poorly put.

Cameron    01:11:54    Let me give a couple of examples. I do think it could be the case that some animals lack some faculties we have, and other animals share faculties we have. Ideally, once you had the kind of abstract mechanistic understanding of the internal structure of a faculty that allows it to play the roles it needs to play, to do the computational work it needs to do, you could use that understanding to see whether other animals have that faculty or not. But it's not just a one-direction thing, where you go from the computational modeling and then go around checking other animals to see if they have it. Looking at other animals can also help us figure out the right level of abstraction at which to cast the faculty's nature.

Cameron    01:12:46    An example I often like to bring up here is echolocation and cognitive mapping. In animal cognition there are tons and tons of papers asking, does species X have a cognitive map? Do bees have a cognitive map? Do rats? Do chimpanzees? And in humans there's all this great neuroscience; people won the Nobel Prize for showing how the dentate gyrus, the hippocampus, and the entorhinal cortex cooperate to let us build cognitive maps of our environment. Based on that level of understanding of how place cells and grid cells work, we can look at other organisms and ask whether they should have a cognitive map or not. But if we didn't have that level of understanding yet, you might look at bats and say, well, bats can't have a cognitive map because they can't see well enough.

Cameron    01:13:38    That's a case where I think there's a clear mistake being made, akin to anthropofabulation, where you think the only way you could have this faculty is by exhibiting it in the distinctively human way. Humans mostly have visual cognitive maps based on visual landmarks, but why couldn't an echolocating organism have echolocating cognitive maps based on acoustic landmarks? So that's the kind of case where looking at other biological organisms can help you decide what's really core to the faculty and its ability to do its computational work, and what's just a contingent way it happens to be implemented in us. And that can help you decide what you should build into your artificial organisms as well.
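
A small sketch of that abstraction point, with hypothetical class names: a cognitive map can be characterized over landmarks and their spatial relations while staying indifferent to whether the landmarks were detected visually or by echolocation.

```python
# Sketch: the map cares about landmarks and spatial relations, not about the
# sensory channel that detected them. Interface and classes are hypothetical.
from abc import ABC, abstractmethod

class LandmarkDetector(ABC):
    @abstractmethod
    def detect(self, observation) -> list[tuple[str, float, float]]:
        """Return (landmark_id, dx, dy) triples in egocentric coordinates."""

class VisualDetector(LandmarkDetector):
    def detect(self, observation):
        return observation.get("visual_landmarks", [])

class EcholocationDetector(LandmarkDetector):
    def detect(self, observation):
        return observation.get("acoustic_landmarks", [])

class CognitiveMap:
    """Stores allocentric landmark positions; indifferent to input modality."""
    def __init__(self, detector: LandmarkDetector):
        self.detector = detector
        self.landmarks = {}

    def update(self, observation, self_position):
        sx, sy = self_position
        for name, dx, dy in self.detector.detect(observation):
            self.landmarks[name] = (sx + dx, sy + dy)

bat_map = CognitiveMap(EcholocationDetector())
bat_map.update({"acoustic_landmarks": [("cave_wall", 2.0, 0.5)]}, self_position=(0, 0))
print(bat_map.landmarks)   # {'cave_wall': (2.0, 0.5)}
```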

Paul    01:14:33    Do you have time for some forward-looking content?

Cameron    01:14:36    Yeah, yeah. And of course I think it's related to a lot of the other stuff we've talked about.

Paul    01:14:40    Well, I was about to say that other, non-human animals or organisms don't possess language, although you said language falls within the attention camp, and of course most animals presumably have attentional mechanisms, because they have to figure out what to pay attention to in the environment, top-down and bottom-up. But they don't have the symbolic processing, and we don't need to go down the road of whether language requires symbolic thought. I'll use that as a segue: thinking of symbols, the symbol grounding problem, and representation. You've had a long interest in meaning, so we're switching gears here to talk about your account of representational content. I'm going to throw the floor open to you, because I'm aware of our time and maybe we shouldn't get too far into the weeds. Maybe you can summarize the forward-looking content account with respect to teleosemantics, which you'll have to define. I know your account is much more intertwined with learning, which is really interesting.

Paul    01:15:49    That was a mouthful, but what is the forward-looking content view?

Cameron    01:15:53    Let me motivate it a little bit. I first got interested in this topic by looking at very similar debates in animal cognition. One of the debates I looked at is whether animals have a theory of mind, which you might think of as a sub-faculty of social cognition: the ability to attribute beliefs and desires to other agents. There are a bunch of experiments showing that chimpanzees, ravens (I worked on some experiments with ravens), and a bunch of other animals can attribute at least perceptual states to other organisms, a kind of simple theory of mind. And then skeptics would look at that work and say, no, that's not theory of mind, because I can come up with a simple associative explanation for those results, entirely in terms of observable cues the animal was seeing at the time, so the animal didn't actually posit some underlying mental state the way it would if it had a theory of mind. Do you see the basic opposition?

Cameron    01:16:57    As a philosopher of science, the thing I found very funny about these debates is that if you sat the two sides down and asked, okay, what do you think the animal can actually do, and you described a series of situations, they'd agree completely. They agree on all the behavioral capacities and all the experimental data, and they have a very hard time coming up with a future experiment that could possibly arbitrate between the two positions, that the animal does or doesn't have a theory of mind. That's where the philosopher says, maybe it's actually a philosophical disagreement rather than an empirical one. Not that I think there's a big sharp divide between the two; I think all philosophical theories should have at least some empirical content.

Cameron    01:17:44    But at least there's a kind of semantic disagreement here. Because if I put the disagreement as: you think this animal has a representation of a perceptual state, yes; you think this animal does not have a representation of a perceptual state, yes; okay, that's what you disagree about, so let's talk about your theories of representation. And they'd say, well, I've been around enough philosophers to know that that's sticky business, that's deep water, so I'm reluctant to put it all on the table. But Daniel Povinelli, love him for this, is willing to put his cards on the table and say, look, I think they should have a representation that really causally covaries with the underlying mental state across a very wide, potentially open-ended range of different situations.

Cameron    01:18:35    And that's where I want to say: no, that's anthropofabulation, because humans don't have that ability either. Humans can only infer that other humans have particular mental states on the basis of observable behavior; we're not psychic. So we also have to use observable cues, and you could tell some sufficiently sophisticated associative story about those if you had a complex enough model that could learn the cues and make the same predictions. So there's a pattern there, and it's the same pattern I now see in artificial intelligence, between the skeptics and the proponents of ambitious interpretations of what deep learning models are doing. You see some experimental results, either from an animal or from a new deep neural network system doing some cool behavior, and the enthusiast says, look, it's object detection, or it's imagination, or, you know, DALL-E 2 is doing some creativity; that's another word that gets thrown around.

Cameron    01:19:42    And then the skeptic says, no, that's just statistics, or that's just pattern matching, or that's just linear algebra; it's not really that thing. And you have to ask, well, what really are the criteria for having that capacity? It always comes down to representations in one way or another. In the animal cognition debate, it came down to what it is to have a representation of a distal mental state, like a perception: I see some particular thing, or I believe some particular thing. In deep learning, what the skeptics want are core knowledge concepts: what is it to have a concept of an object, or a concept of a cause? These are again debates that are ultimately cast in representational terms. So it doesn't seem like you can arbitrate the dispute empirically unless the two sides agree on what it is to have a representation of that concept.

Cameron    01:20:38    And if you ask them, you see that the two sides have different implicit theories of representation. In particular, the skeptic's game is that all they think they need to do is show a few cases where the other system makes some apparent mistake they think humans wouldn't make, and then say: aha, look, I've exposed the fraud, they don't have that capacity. But humans make all kinds of mistakes too. If you want to say that humans have a true concept of causation that deep neural networks lack, even networks trained to solve causal inference problems, well, there are all these famous studies showing that even Harvard undergraduates who just finished a physics course with a high grade, when you take them out of the classroom and ask them simple physics problems, make very elementary correlation-causation mistakes.

Cameron    01:21:31    They make weird impetus-principle mistakes where it seems like, if they'd just done a few calculations on a napkin, they could have gotten the right answer, but these errors still seem deeply ingrained, and more interpretable in terms of a kind of statistical theory of what they're doing. So it can't be that making some mistakes, any mistakes an ideal reasoner wouldn't make, rules you out of the realm of agents that could have the concept, whether it's causation or a belief or a mental state or whatever. You actually have to do the hard work of saying which mistakes are disqualifying and which ones aren't, and that's where you need a principled philosophical theory. If you go back to the eighties, the heyday of the attempts to naturalize representation, from Fodor, Millikan, Dretske, the teleosemantics you mentioned earlier, they all start from the premise that the simple causal theory of content, where to have a representation of X you have to have some neural state that perfectly causally covaries with X, is false, and obviously false.

Cameron    01:22:55    Because we all make mistakes. Misrepresentation, in other words, is just a basic fact of mentality; there's no concept we have such that we never make a mistake with respect to it. So we need some different principle that determines whether we have a representation with some particular content X, one that doesn't require perfect use. And all of the different teleosemantic theories are attempts to pick a principle that doesn't require perfect use but still ascribes determinate content. Do you have a follow-up question before I go deeper, or are we okay so far? I know these are deep philosophical waters.

Paul    01:23:43    I've read your work, so I'm keeping up, but who knows. I don't have anything to ask for clarification.

Cameron    01:23:50    A lot of these theories appeal to evolutionary theory or to information theory. Dretske's theory, for example, which was my favorite, is a teleosemantic theory. It tries to decide what the function of the representation is; that's how you determine the representation's determinate content without requiring perfect use. And the way you figure out its function, according to Dretske, is to look at the representation's learning history. He interprets learning as a function-bestowing process. What learning does is pick up what he calls indicators: say, a neural state, maybe in perceptual cortex, that happens to be activated whenever some particular thing in the environment happens, at least in that particular circumstance.

Cameron    01:24:48    So let's say we're in a brightly lit room and I have a particular neural state that lights up whenever I see my water bottle in this brightly lit room, and I learn that the water bottle affords drinking when I'm thirsty. Then I might recruit that neural state, through learning, and Dretske talks about simple operant conditioning here, to control my grasping and drinking movements. A rat can do that. But then turn out the lights. In some sense the water bottle is still there, but that same neural state maybe doesn't fire; I need to learn that the water bottle looks different in the dark, or use haptic feedback to find it. That doesn't mean I don't have a concept, a representation, of the water bottle. According to Dretske, that representation still indicates the water bottle, still means the water bottle in his sense, because it has the function of indicating the water bottle; that's what it was recruited to do in those previous conditions.

Paul    01:25:54    So the representation is the function? Sorry, I'm going to jump in when I can.

Cameron    01:25:59    The representation is just a neural state. It has a function; it's bestowed a function by learning when it's recruited to control some desire-satisfying movements.

Paul    01:26:12    Mm-hmm, okay.

Cameron    01:26:14    Right. So you get the teleology from the agency aspect of it. The idea is that you're a system with needs that have to be met, and you're thrust into an environment where you don't know how to meet them. Learning is a process that picks certain perceptual brain states and bestows them with certain functions, through this process of recruitment to control certain movements, because triggering those movements in those circumstances was successful in satisfying the need in the past.
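
A toy rendering of that recruitment story, my own simplification rather than Dretske's formulation: during learning, an indicator that happens to covary with the target gets wired to an action, its content is fixed by that history, and it can later misfire in new conditions.

```python
# Toy gloss of Dretske-style recruitment: learning bestows a function on an
# indicator state; its content is then fixed by what it indicated during
# recruitment, even if it later misfires when conditions change.
class Indicator:
    def __init__(self, trigger):
        self.trigger = trigger          # what actually makes the state fire
        self.recruited_for = None       # function bestowed by learning

    def fires(self, scene) -> bool:
        return self.trigger(scene)

def recruit(indicator, episodes, action, need_satisfied):
    # If acting when the indicator fires kept satisfying the need during the
    # learning episodes, the indicator acquires the function of indicating
    # that condition.
    successes = sum(1 for scene in episodes
                    if indicator.fires(scene) and need_satisfied(scene, action))
    if successes == len(episodes):
        indicator.recruited_for = "water_bottle_present"
    return indicator

# The indicator actually fires on "bright room AND bottle visible"...
bottle_cell = Indicator(lambda s: s["lit"] and s["bottle"])
training = [{"lit": True, "bottle": True} for _ in range(10)]
recruit(bottle_cell, training, action="grasp_and_drink",
        need_satisfied=lambda s, a: s["bottle"])

# ...so in the dark it fails to fire even though the bottle is there:
dark_scene = {"lit": False, "bottle": True}
print(bottle_cell.recruited_for)       # 'water_bottle_present' (fixed by history)
print(bottle_cell.fires(dark_scene))   # False -> a misrepresentation by omission
```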

Paul    01:26:53    And "in the past" is key here, right? Because it's a historical, backward-looking way of saying what a representation is. At every moment we are our best and most perfect selves, and our representations are always backward-looking in that respect.

Cameron    01:27:09    Right, and that backward-looking aspect is the part that always bothered me. I thought Dretske's story was brilliant, ingenious. And if you interpret his covariation condition in the past recruitment situation strictly, that is, anything that causally covaried with the state in the past, all and only that, is the content of the representation, then it solves the indeterminacy problem, which is a horribly difficult problem, and it's one of the only views I think actually solves it.

Paul    01:27:41    Whoa, wait, sorry. State the indeterminacy problem again. Or I can maybe state it and you correct me if I'm wrong. It's that you can't look back and say which strand of possible historical contingencies led up to the present content of the representation.

Cameron    01:28:06    Yeah. A standard way of painting the indeterminacy picture is with a frog that sticks out its tongue to eat flies. You want to say the frog's representation means fly. And then the skeptic says, but what if I flick little BBs in front of the frog, and the frog's tongue darts out and eats those too? So what is the content of the representation? Is it fly, because that's what caused this particular perceptual representation to control tongue-darting movements due to some evolutionary selection, as some of the more evolutionarily oriented semantics might say? Is it BB, because it now darts to select BBs? Or, and this is the really tricky one to rule out, is it some more proximal property shared by both flies and BBs, like small dark moving speck? In some sense you want to say fly is the more teleologically satisfying answer, because that's what actually meets the need, but any time the state indicated fly, it also indicated small dark moving speck.

Paul    01:29:17    And what does Dretske say about the frog?

Cameron    01:29:21    It changes; it depends on which time slice of Dretske we're talking about. Like many great philosophers, his views evolved over the years, and he played with different notions of indication. There's a strict notion, where the content is only what the state actually covaried with in that particular circumstance, and there's a more open-ended notion, where it's anything the state could have covaried with. And that's the particular tension I wanted my view to resolve. On Dretske's view, and let me clarify how misrepresentation is possible, because this is the part of the story that's implausible to me: you can have a representation of the water bottle even when your representation later doesn't perfectly causally covary with the water bottle.

Cameron    01:30:18    That is, you make some mistakes. Either you grab something that looks like the water bottle but is really a duplicate, or you fail to grab the water bottle when it's right in front of you because the lights are out. Those are two types of mistakes, and on Dretske's view it's okay to make them; it doesn't mean you're not competent with the concept of the water bottle, because the representation has the content of indicating the water bottle given its recruitment history, and it's only later, when you're out of that environment of recruitment, that you make the mistake. So for Dretske you can't make a mistake during recruitment, it's logically impossible, and mistakes become possible only later, when you move to a different environment with different contingencies. And both aspects of that always struck me as wrong.

Cameron    01:31:02    As ingenious as the view was, we make mistakes all throughout learning. If you watch a kid learning to do something, we recognize their errors as mistakes while they're learning, and in fact it's crucial that we recognize them as mistakes while we're learning, because that's how learning works. If you look at the acquisition of expertise in any domain, an agent that just blunders along with trial and error is never going to become an expert. You have to attend to the causes of your successes and failures, actively diagnose where you went wrong, and then correct what I might call your conception of the thing you were trying to interact with, as a result of the mistake, to get closer and improve your use.

Cameron    01:31:49    And that's how learning works throughout the trajectory. So I thought any view that couldn't make sense of that is just misrepresenting how learning works as a matter of empirical fact. What I wanted was a view that would save the parts of Dretske's view that were ingenious and that worked, but without misrepresentation being derivative of this artificial construal of learning, where you don't make mistakes during the recruitment history and only make mistakes later, in a different environment with a different contingency structure from the one you learned in. The forward-looking view is supposed to help with that. To make a long story short, on the forward-looking story you no longer ground content ascriptions in the agent's learning history, or evolutionary history for that matter.

Cameron    01:32:42    You ground them instead in the agent's own dispositions to respond to representational errors. So go back to the frog. The idea is that if I'm in doubt whether the frog's representation means fly or BB, I expose the frog, through some open-ended interactions, to lots of flies and lots of BBs and see what it does. If it stops responding to the BBs over a long enough period of time, then I want to say it treated responding to those BBs as a mistake in its learning trajectory, so it was always aiming at something else, maybe fly. If, however, it continues to eat BBs until it gets a belly full of lead, as I think frogs actually do, then this view says it lacks the capacity to revise that representation to better indicate flies.

Cameron    01:33:39    And so it really does mean something that encompasses both fly and BB, like small dark moving speck. I think some animals are like that and some aren't; some animals are more flexible and can learn to better indicate the referents of their concepts through further, potentially open-ended interaction with the environment. Again, I like to think of expertise here, where you can keep getting better at indicating something for decades; there's no definitive point at which learning stops. But if you're ever in doubt about the function of a representation in the cognitive economy, we should be deferring not to some magic recruitment period but to the agent's own dispositions to detect and respond to representational errors. I also like the view because it suggests ways to do experiments when you're in one of these deadlocked situations. The raven experiment I mentioned earlier, for example, was inspired by this type of view: you're not sure whether the raven has a representation of another raven's perceptual state, or merely some current cue for that perceptual state, like gaze, the direction of the other raven's head or eyes, which was the skeptic's preferred explanation for all the data that had come before.

Cameron    01:35:04    They'd say the raven doesn't really understand anything about seeing, it's just using the simple associative cue of where the other raven's eyes are pointed, so it doesn't need to understand anything about mentality. What the forward-looking view says is that you need a further experiment where the raven has the opportunity to learn about other cues that indicate seeing, cues that are not gaze. That's the type of experiment we did, and the ravens seem to pass. So the thought is: if the raven can recruit a potentially open-ended number of other cues that indicate that same shared disposition, respond to them in the same way, and generalize its previous behaviors to these new cues, that suggests the raven's representation was aimed all along at the perceptual state, seeing, rather than just gaze, which it would stop responding to if gaze stopped indicating the thing it actually cares about, namely seeing.
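
A minimal simulation of the forward-looking test applied to the frog case, assuming a simple error-driven learner; the probabilities and update rule are invented for illustration. Content ascription follows the agent's disposition to treat strikes at BBs as errors and suppress them.

```python
# Toy simulation of the forward-looking test: expose the agent to flies and BBs
# and see whether it treats strikes at BBs as errors and revises over time.
import random

def run_trials(agent_learns: bool, n=500, lr=0.2):
    p_strike_bb = 1.0     # initial tendency to strike at any small dark speck
    for _ in range(n):
        target = random.choice(["fly", "bb"])
        if target == "bb" and random.random() < p_strike_bb:
            nutrition = 0.0                     # striking a BB yields nothing
            if agent_learns:
                # error-driven update: the agent registers the miss and revises
                p_strike_bb = max(0.0, p_strike_bb - lr * (p_strike_bb - nutrition))
    return p_strike_bb

flexible = run_trials(agent_learns=True)
inflexible = run_trials(agent_learns=False)
print(f"flexible frog strikes BBs with p={flexible:.2f}")     # drifts toward 0: content ~ 'fly'
print(f"inflexible frog strikes BBs with p={inflexible:.2f}") # stays 1.0: content ~ 'small dark speck'
```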

Cameron    01:36:04    And it further suggests a way, now, in machine learning (I don't talk about this in the book because I was worried I'd scare off all the machine learning researchers if I started in on theory of representation) to arbitrate these disputes. Gary Marcus goes on Twitter and says, look, I gave DALL-E 2 the prompt "three blue squares in front of three red squares," and it got the squares in the wrong order, it painted them the wrong color. Well, what if you gave DALL-E 2 the opportunity to learn about those situations, whether it got the answer right or not, and you gave it that kind of training? A lot of people responded by saying that these types of situations, these types of relations, just weren't in DALL-E 2's training.

Cameron    01:36:51    So why think it would be good at them, given the kinds of captions people put on pictures on the internet? The question is whether it could learn to better indicate those relations through further rounds of training, online training especially, where it's detecting its own errors. If it's all just supervised learning, I might be more skeptical of applying the forward-looking story and saying it really has the representation. But if we built systems with the kind of self-supervision that LeCun has been recommending for the last few years, then I would start to say we now have a way to ask the system what it really has a representation of: see whether it can get better at this by detecting its own representational errors and improving its use as a result. We still don't really have that kind of interactive, unguided, self-supervised learning, or at least we don't have deployable state-of-the-art systems doing it yet, but I'm sure it's just around the corner.

Paul    01:37:47    But if you took, say, an oracle: you train up a machine learning model, it answers everything, then you freeze its weights so it can't learn, and it answers a hundred questions correctly. Does that have content? It doesn't get a chance to improve, nor does it need to, because its predictions matched the outcomes. And what about the old guy next door who you can't teach anything and who already knows everything? Older people who become more brittle or more obstinate in their opinions: do they have less content, or are they lacking content in their representations? Is that what I should tell the old man next door?

Cameron    01:38:35    Gosh, you're going to get me in trouble. But yeah, I think, actually...

Paul    01:38:39    Wow.  

Cameron    01:38:40    Probably. That's okay; they at least have different contents than you do. The way I look at it, it's not actual futures that matter, not what you're actually going to do over the next 30 years. It's rather: based on what you know now and the learning dispositions you currently have, if you had an open-ended exploration of this environment, how would we predict you would respond over time? That's what really matters for the content ascription on my view. So a more flexible learning system, and I think this is the right answer, can have much more variegated and specific contents than a much less flexible learning system. Sorry. But in some of these cases, too, it may look to us like somebody's making a mistake when they say, no, I'm fine.

Cameron    01:39:31    And you point it out to them, and they say, no, it's fine for me. Who are we to say they're making a mistake? Maybe that's really the content they were after. And again, I think that's ultimately why deferring to the agent's own capacities and dispositions to revise is the right answer, because that's what's going to make the right predictions about their future behavior. I think content ascriptions should be earning their keep as psychological posits by making empirical predictions that are borne out by experiments or interactions with the agents that have those contents. And for me, that's again what always bothered me about the backward-looking gambit: you end up making lots of predictions about what the agent is going to do, and if you ascribe contents that are too ambitious for its own learning capacities, you're predicting it's going to eat a bunch of flies it just can't see, or predicting it's not going to eat a bunch of BBs that it is, in fact, repeatedly going to eat until its stomach is completely full.

Cameron    01:40:41    Whereas the forward-looking gambit makes it possible to make mistakes, but the mistakes should not be systematic, in the sense that they should not persist indefinitely. If you make a mistake, you should treat it like a mistake and be less likely to do that thing again in similar circumstances in the future. And I think that's ultimately why content ascriptions are useful to scientists, not just to philosophers: they help us make a kind of prediction that it seems like we couldn't make otherwise. You can call it whatever you want, but you would need to invent a new kind of posit to predict how a dynamic system, and maybe I'm starting to sound like Bickhard here, is going to interact with its environment over time, where neither the environment nor the agent is static. I often say that if you want to shoot an arrow, you need to aim where the target is headed, not where it's been. And I think that's the way to look at content ascriptions in psychology and in machine learning.

Paul    01:41:48    All right, Cameron, I've kept you long enough. I appreciate your patience with me.

Cameron    01:41:53    No, thanks, this was useful.

Paul    01:41:55    We've gone down a long road. So yeah, thanks for being on.

Cameron    01:41:59    Yeah. Thank you.