Brain Inspired
BI 181 Max Bennett: A Brief History of Intelligence

Support the show to get full episodes, full archive, and join the Discord community.

Check out my free video series about what’s missing in AI and Neuroscience

By day, Max Bennett is an entrepreneur. He has co-founded and CEO’d multiple AI and technology companies. In countless other hours, he has studied brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing the brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through, I think, all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn’t immediately present, and this ability might further explain mammals’ capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.

The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.

0:00 – Intro
5:26 – Why evolution is important
7:22 – Maclean’s triune brain
14:59 – Breakthrough 1: Steering
29:06 – Fish intelligence
40:38 – Breakthrough 4: Mentalizing
52:44 – How could we improve the human brain?
1:00:44 – What is intelligence?
1:13:50 – Breakthrough 5: Speaking

Transcript
[00:00:03] Max: What if we looked at all of the comparative psychology, which has been an incredible amount of research even in just the last decade? We take that and compare it to all of the work in evolutionary neurobiology, and then we map all of that to concepts in artificial intelligence. Is there a first approximation that reveals itself when we compare these three things?

If you think about sources of learning through the evolutionary story, you can almost see the entire five breakthroughs model through the following lens: it has been a progressive expansion of getting new sources of learning for brains.

[00:00:46] Paul: Good day, Brain Inspired listeners. Welcome to Brain Inspired. I’m Paul. Today my guest is Max Bennett. Max has co-founded and/or CEO’d multiple AI and technology companies, making him an entrepreneur. In many other countless hours, however, he has studied brain-related sciences. Those long hours of research of his have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains. And you’ll hear Max talk about this in a moment more, but it’s, I think, worth repeating here that three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology, which is roughly comparing the brains and minds of different species, findings from evolutionary neuroscience, how brains have evolved, in other words, and findings from artificial intelligence, especially the algorithms developed to carry out functions. So Max assimilated lots of research from those three lines of research to form what he calls the five breakthroughs that made our brains. And during our discussion, we go through most of the breakthroughs, if not all of them, in some capacity. And a recurring theme is that a single breakthrough that Max cites may explain multiple new abilities. So, for example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn’t immediately present. And this ability might further explain mammals’ capacity to engage in vicarious trial and error, so imagining possible actions before trying them out, the capacity to engage in counterfactual learning, which is what would have happened if things went differently than they did, and the capacity for episodic memory and imagination. So the book is filled with unifying accounts like that, and it makes for a great read. And you should strap in, because Max gives sort of a masterclass about many of the ideas in his book. You can learn more about Max and find a link to the book in the show notes at braininspired.co/podcast/181. And there you can also learn how to support the show on Patreon, if you so desire. Thank you, as always, to all my Patreon supporters. Okay, here’s Max.

[00:03:15] Paul: I cannot be the first one to say it.

So the book is called A Brief History of Intelligence. Did it start off brief and it grew?

[00:03:30] Max: It did start off briefer than it ended up becoming. It’s funny, the original draft was, like, 350 pages, so it took a lot of culling to bring it down.

[00:03:41] Paul: Okay.

[00:03:41] Max: And I say 350 pages, not including the end notes or anything. So, yes, I think a fair critique is: it’s not the most brief.

[00:03:47] Paul: Yeah. Okay. So I didn’t know if that was, like, just an ironic title or.

[00:03:52] Max: I think we kept it a little bit ironic. I think the irony underlying it is for covering 600 million years of brain evolution, it is comparably brief.

[00:04:03] Paul: Yeah. So, I mean, you’re not a quote, unquote, practicing neuroscientist.

How would you describe yourself?

[00:04:13] Max: Labels are always funny things, I’d say. I’m an AI entrepreneur. I mean, my career has been in commercializing AI technology, and over the last four years, I’ve become independently really fascinated with neuroscience. So I’ve sort of self taught and then ended up collaborating with a lot of neuroscientists over email and then publishing some of my own papers and getting loosely involved in some labs. But, yes, I don’t have the sort of classic training in neuroscience.

[00:04:41] Paul: Yeah, it’s interesting.

I recently had a philosopher, Laura Grudowski, on the podcast, and she specializes in fringe theories, essentially, like, looking at the history of people and/or theories that have started off as fringe and then eventually become mainstream. And she makes an argument that we should pay attention to all theories, fringe or not. But one of the interesting characteristics of people who often start as, quote, unquote, fringe, which is a fringey word, is that they are experts in one domain and then come in and visit a new domain. Right. And then kind of inject, like, fresh ideas. And I know one of the people that comes to mind is Jeff Hawkins, and I know that he’s been an influence on you and me as well. And so he kind of fits that bill. So how have you found it as a sort of outsider?

Because I know that you’ve had a lot of running conversations with a lot of well known neuroscientists and leaders of the field, et cetera. How have you found their acceptance of you?

[00:05:58] Max: I think it’s been really wonderful. I mean, I think the refreshing thing, perhaps, is I have no ulterior motive. I’m not trying to get ahead in some academic career. I’m not trying to get tenure. I’m just curious, and I’m just fascinated by the topic. And I think that for folks who are in the sometimes political academic world, folks find that refreshing. And I’ve been really humbled and honored at the degree to which folks have been accepting and taken me under their wing. I mean, I take a student mindset; I want to learn from folks. So, yeah, I’ve been really humbled. Folks like Joseph LeDoux, Karl Friston, Eva Jablonka, and David Redish have just become mentors of mine, and we’ve started collaborating. So I think it’s been really awesome how open the community actually has been to a curious outsider.

[00:06:57] Paul: So the book is delightful. It’s an extremely pleasant read. It’s patient and careful and well written. So congratulations. Thank you. Why evolution?

I think what I want to ask you is, so you have kind of an engineering mindset, perhaps, right? And there’s a lot of talk about reverse engineering the brain to understand intelligence, and we’ll talk about maybe what intelligence even is.

But how did you get on the track of thinking that evolution was important to understand in order to understand intelligence?

[00:07:34] Max: Yeah, I think folks in technology in general have a bent towards looking at things through the lens of ordered modifications. So, for example, when I take an entrepreneurial lens at looking at new business areas and new technologies, it’s very common to look at things as a technology ecosystem, and we try to understand what new technology is introduced. So the Internet is introduced, and how does that change the overall system? Or we all of a sudden have exponentially growing compute. How does that change the ecosystem?

Or we think about product strategy. You think about Tesla’s product strategy: we start with a high-end roadster, then we move to the Model S, then we move to the Model 3, et cetera. So I think there’s, in general, a cognitive bias towards trying to understand complexity by starting from a simple place and then observing the modifications that got us to today. I think that’s just a cognitive bias I came with.

And so I think presented with a curiosity to understand the brain and then the maelstrom of complexity that is the human brain, I think I perhaps had a bent to try and take a similar approach.

And so MacLean’s triune brain is the most popularized evolutionary framework for understanding the brain, but it’s been almost entirely discredited, I think in some ways unfairly.

It’s become too popular, to the point that people have over-indexed on what MacLean really meant. He really meant it as a first approximation.

[00:09:14] Paul: Can you just give an overview picture of the triune brain, for the people who don’t know?

[00:09:20] Max: So MacLean’s triune brain is the idea, and almost everyone’s heard about it in popular culture, but his idea is that the human brain is made up of three separate brains that track the evolutionary history of how the brain came to be. So, at its core, we have the brain stem, which largely is our reptilian brain. It is the origin of our sort of survival instincts and basic reflexes. And it’s called the reptilian brain because the argument is that if we look into the brain of a lizard, all they have is the presumed areas of our reptilian brain. Then there is what is sometimes called the limbic system, or what MacLean called the paleomammalian complex, which is the structures that evolved in early mammals.

And he argued that that was the origin of our emotions. And then above that is the neomammalian complex, which is the parts of the brain that evolved in later mammals, neo being new. And that’s the origin of cognition. And so the idea is the human brain is made up of these three layers, and each of these layers has its own sort of mechanisms for making behaviors. They’re at war with each other, and it tracks our evolutionary history. So early mammals, like a rat, have only the reptilian brain and the paleomammalian complex, and then us and other apes have all three of these systems.

And it’s largely been discredited for a few reasons. One, the modern work that’s been done in evolutionary neuroscience does not track this. So if you look in the brain of a lizard, we see a bunch of structures phylogenetically related to the limbic system. So it’s not the case that they only have the quote unquote reptilian brain. The second issue is these three functions, instincts, emotions, and cognition, do not, in fact, delineate cleanly across these three structures. So there are cognitive functions in the supposed paleomammalian complex, perhaps also in the brain stem. Emotions clearly emerge from neomammalian structures as well. So these functions don’t delineate cleanly. But the biggest critique for me, coming from a more engineering perspective and trying to understand mechanistically how the brain works, is this model doesn’t really give us anything of a roadmap for how to reverse engineer how the brain works.

[00:11:44] Paul: Right?

[00:11:44] Max: So it comes from the lens of psychology, which is we’re talking about instincts, we’re talking about emotion, we’re talking about cognition. This doesn’t have any clear mapping to concepts in the world of artificial intelligence to give me any starting point for how to understand mechanistically how the brain works. So throwing out the MacLean triune brain, we have nothing to fill that void, right? So what’s been left is just, oh, it’s just really complicated, and we don’t know. Which is true.

[00:12:12] Paul: It’s true.

[00:12:13] Max: It is true. It is true. But I do think there is value in first approximations, because the human brain does not have the cognitive capacity to fathom a billion different nodes in an evolutionary journey, even if we had that. So first approximations are useful if they have explanatory power, and if we understand that they’re not perfect, right? So it’s important to understand that. So I went on this long journey, originally for myself, and then I stumbled on what I thought would end up being useful for others, and hence why I shared it. But what I wanted to do is say, okay, what if we looked at all of the comparative psychology, which has been an incredible amount of research even in just the last decade? And so, in comparative psychology, we have all of these intellectual faculties across different species, right? So we do studies on fish, we do studies on lizards, we do studies on mammals, primates, et cetera. And then we take that and compare it to all of the work in evolutionary neurobiology, which has also been an exploding field, where we now have a pretty good understanding of the order of brain modifications over the evolutionary history of the brain. And then we map all of that to concepts in artificial intelligence, meaning we understand in what capacities algorithms do and do not work. And so that puts constraints on how we can understand how the brain works. And is there a first approximation that reveals itself when we compare these three things? And what I found is the answer to that was a resounding yes.

There is a worthwhile, simplified first approximation where, if we compare the brain structures that evolved at certain places in our evolution with the seeming emergence of certain new behavioral capacities, what ends up happening is that a suite of behavioral capacities that seem disjointed turn out to hang together. So, for example, in mammals, and I will get into this in more detail, I’m sure, there’s good evidence that, compared to most non-mammalian vertebrates, like fish, mammals uniquely have vicarious trial and error, meaning mammals can imagine possible futures, episodic-like memory, and counterfactual learning. These are three seemingly very different capacities, and the only new brain structure we really see emerge in mammals is the neocortex. But all of these can actually be understood as emerging from really one new function, which is the ability to simulate: the ability to imagine a state of the world that is not the current one. And you can apply that to imagining futures, to re-rendering pasts, and to considering alternative past choices. And that’s just one example of how, when we compare all three of these fields, comparative psychology, AI, and evolutionary neurobiology, what seem like disjointed litanies of new abilities and brain structures emerging actually get really well aligned, I think, as a first approximation. So that was the core idea of the five breakthroughs model, which is these five fundamental steps in brain evolution, which I found really fun and fascinating.

[00:15:29] Paul: Yeah. So simulating would be the third of those five. And part of the reason why the book is so enjoyable is because everything is so ordered and neat, and so one has to wonder. I mean, it must have taken you so much time to sort of categorize these things as you were going through. I, as a reader and as someone who studies the brain, am curious how you came up with five. We live in a world of, what do they call them, listicles, right?

And we want to put things in these neat and ordered lists, and you’ve done that. But then each of the five, you unify, like you just did with simulating explaining three different things; each one of the breakthroughs explains a set of things.

So do you want to just go through the overarching five, what you call breakthroughs over time?

[00:16:20] Max: Sure.

Well, first, it’s important to acknowledge that first approximations are exercises in trying to decide what features are important or relevant in your approximation versus not. So, for example, to add complexity to the model, which is fair: breakthrough two is the brain structures that emerged in early vertebrates. Breakthrough three is the brain structures that emerged in early mammals. It is, of course, ridiculous to claim that there were no brain modifications between the first vertebrates 500 million years ago and the first mammals 150 million years ago. But what is most surprising, looking at the arc between those periods, is how little actually changed. So it’s not to say that nothing changed, but it is to say, if we’re going to draw a first approximation, it is surprising, over that long stretch of time, how there weren’t many meaningful brain modifications. And then, with mammals, we see this dramatic new structure emerge, the neocortex. But yes, you’re absolutely right. First approximations are exercises in deciding what is relevant to include versus where can we accept fewer variables for simplification at the cost of explanatory power.

Okay, the five breakthroughs start with steering, which for those technical in the audience would be taxis navigation. So the first brains, a good model organism for this is a nematode, or early bilaterians. And before bilaterians, there were no brains. So the first animals with neurons were probably most akin to a coral, which had a nerve net. So neurons existed, but there were no brains. And so it poses the really big question, why did brains evolve at all? Why do we need this sort of centralized cluster of neurons? Why can’t we just have a nerve net? And if you look at really simplified bilaterians, such as C. elegans, which is the most famous species of nematode, what’s so important about the centralized brain is it enables you to make very clear trade-offs. So if you look at how a nematode navigates, it only has 302 neurons, C. elegans does, and it clearly does not model the world. It doesn’t see; it has no lens-shaped eyes. It only has photosensitive neurons that can detect the presence or lack of light, or the presence of a certain smell. And yet it almost always will find food if you put it in a petri dish with food, and if you put in a predator smell, it will always find its way to get away from it. And so how does it do this? Well, taxis navigation is this incredibly clever algorithm for navigating in a world without understanding the world. And the way it works is this. If you have a body plan which, a bilaterian, I guess I should explain what a bilaterian is. A bilateral body plan is a body plan with symmetry across your central plane. So if you drew a line through a human, our right side, on average, is symmetrical with our left side. This is not the case for a jellyfish, which has radial symmetry across a central axis, so it’s radially symmetric. And all navigation systems, from worms to cars to planes, have bilateral symmetry. And that’s not a coincidence. From an engineering perspective, it is much more efficient to build a navigation system by giving it the ability to move forward and backwards. So have something that’s optimized for forward momentum and then have the ability to just turn. So you have a plane. You could imagine building a flying structure where it has engines arranged radially symmetrically around it, where it hovers and can move in any direction. But that would be way less efficient than saying, let’s just have something that moves forward and then have the ability to turn.

[00:20:16] Paul: But a rocket is a bilaterian, bilateral, how do you say it? Bilaterian. And also radially symmetric. And a jellyfish as well. Right?

[00:20:29] Max: So a rocket. Most rockets are actually bilaterally symmetric. Not all of them, but most of them are. If you look at the fins, they’re actually going to have fins like that. But you’re right, you can build a radially symmetric system.

You can.

[00:20:46] Paul: Just a sticking point, it’s not important. But.

[00:20:49] Max: No, you’re right, you can. But it is interesting that most navigation systems are more efficient when they’re designed with bilateral symmetry. Yeah, and so if we go back to the very first bilaterian, for which we use nematodes as a model organism, taxis navigation and bilateral symmetry go hand in hand. So the way this algorithm works is, if you have sensory neurons that detect increasing good things, so that can be the increasing concentration of a food smell or the decreasing concentration of a predator smell, that means things are getting better. And then you have another set of neurons that detect decreasing good things or increasing bad things. So they have a sensory neuron that detects the decreasing concentration of a food smell or the increasing concentration of a predator smell. And those connect to two simple things: a motor neuron that drives forward momentum is activated by the increasing good things, and a motor neuron that drives turning is activated by decreasing good things or increasing bad things. And if you just have something that does that, it will eventually, through turning away from bad things and turning towards good things, find its way towards the source of a food smell, or find its way away from a predator smell. And what’s so clever about this is it was available to early animals. Going from a jellyfish to a mammalian brain would have been evolutionarily impossible; you can’t make that jump. But going from a jellyfish to a simple brain that simply turns away from bad things and towards good things was available. And so the reason you need a brain for this is you need to be able to make trade-offs. So you can put a nematode, for example, in a petri dish. This is one of my favorite nematode experiments. And put a copper barrier down the center of the petri dish. And for whatever reason, nematodes steer away from copper because it’s somewhat toxic for them. And on the other side of the copper barrier, put a food smell.

[00:22:48] Paul: Okay?

[00:22:49] Max: And what’s so cool is nematodes will make trade-offs as to whether or not they will choose to cross this barrier. And it entirely depends on two things: the relative concentration of the food smell and the copper barrier, and how hungry they are. The more hungry a nematode is, the more willing it is to cross the copper barrier. And the higher the concentration of the food smell, the more willing it is to cross the copper barrier. But this means that they’re making trade-offs in their brain. It means that the neuron that’s getting excited by the food smell is in competition with the neuron that’s getting excited by copper, which is trying to get them to turn away from it. In order to do that, there needs to be some common neural machinery to compare these things. And when we look at a nematode brain, we’ve actually mapped exactly that circuitry. So this is why, and this isn’t uniquely my theory, a lot of people would corroborate this idea, that the first brains evolved for this sort of central comparison trade-off mechanism in taxis navigation.
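To make the steering algorithm concrete, here is a minimal Python sketch of the kind of rule Max describes: move forward when the sensed gradient is improving, turn when it is worsening, and weigh attraction against aversion. The toy gradient world, the hunger weighting, and all of the names are invented for illustration; they are not from the book.

```python
import math
import random

def taxis_step(position, heading, food_smell_at, copper_at=None, hunger=0.5, step=0.1):
    """One step of the steering (taxis) rule sketched above: keep going forward
    while the sensed gradient improves, otherwise turn. The hunger weighting and
    copper aversion mimic the trade-off experiment; all numbers are made up."""
    here = food_smell_at(position)
    ahead = (position[0] + step * math.cos(heading),
             position[1] + step * math.sin(heading))
    there = food_smell_at(ahead)

    good = (there - here) * hunger                      # increasing food smell, scaled by hunger
    bad = (copper_at(ahead) - copper_at(position)) if copper_at else 0.0  # increasing aversive signal

    if good > bad:
        return ahead, heading                            # things are getting better: go forward
    return position, heading + random.uniform(-math.pi, math.pi)  # otherwise: turn

# Toy usage: a food source at the origin, no copper barrier.
food = lambda p: -math.hypot(p[0], p[1])                 # smell gets stronger closer to (0, 0)
pos, heading = (1.0, 1.0), 0.0
for _ in range(2000):
    pos, heading = taxis_step(pos, heading, food)
print(pos)  # wanders into the neighborhood of (0, 0) with no model of the world
```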

[00:23:50] Paul: Can I interrupt and ask, I should know this.

Do single cells also do that? I don’t know if single cells. Right. Okay, so I keep coming back. So as we learn more and more about organisms, we learn how smart single cells can be, right? And they can learn, and they don’t have brains. So is there something special about.

They don’t have neurons, right? So is there something special about a neural net versus a single cell who could perform that same feat?

[00:24:21] Max: So what’s, I think, almost beautiful about life in the universe is how it’s founded on randomness, which is just like random mutations, and yet common things are repeatedly recapitulated, which is just this almost beautiful fact about life. Taxis navigation exists in almost all single-celled organisms. That’s how they move around. It’s the same principle.

But when you have a large multicellular organism, the mechanisms of taxis navigation, which is protein receptors that drive changes in protein propellers, don’t work. You can’t move a huge multicellular organism like a nematode, which has millions or more cells, using those basic protein propellers. So the same algorithm was recapitulated in a completely different medium, which was sensory neurons and muscles, to enable a large multicellular organism to move around. But the algorithm is the same, which I think is really fascinating.

[00:25:25] Paul: Okay, sorry to interrupt. Sorry to interrupt.

[00:25:27] Max: No, it’s great. So from steering, one key component from an AI perspective evolved in these... well, actually, there’s one cool thing that I think is not often appreciated, that I find really fascinating about taxis navigation and nematodes, and then I’ll go to the next step. So if you look at nematodes and you look at dopamine and serotonin neurons. There’s been so much talked about these two neuromodulators, and we know they have really interesting effects across vertebrates, humans.

But nematodes give us a window into what was the first function of dopamine, which we think about as a reward signal, which drives seeking and pursuit, and serotonin, which we know has some loose association with satiation and satisfaction and delay of gratification. And if you look at a nematode, the difference in these two neurons, at least, there are multiple, but the difference in the fundamental function of these two kinds of neurons in nematodes has, like, another beautiful synergy with those two ideas. The dopamine neurons in a nematode are sensory neurons that detect bacteria, food around the nematode. And they drive a behavioral state of exploitation or dwelling: it makes it slow down and turn and do local, area-restricted search, which is a primitive version of, like, wanting and seeking. The serotonin neurons are sensory neurons in the throat of a nematode. They detect when it’s actually consuming food, and they drive satiation, meaning the pausing and resting after eating enough. So the dichotomy between dopamine, the seeking and pursuit of nearby rewards, and serotonin, the actual receiving of rewards and satiation and satisfaction, goes all the way back. You can see that algorithmic blueprint all the way in the very first nematodes, which is sensory neurons for nearby good stuff and sensory neurons for good things already happening. And I just think that’s so beautiful. Obviously, those two things have been elaborated and do more complicated things in human brains, but that basic template we still see across the animal kingdom. Okay, but from an AI perspective, the fundamental thing we get from steering is a reward signal.

With steering comes the categorization of things in the world into good and bad. And what’s interesting about a nematode is, unlike a human brain, the sensory neurons of a nematode directly encode what’s called valence, which is just the goodness and badness of something. It’s not the case in the human brain. The sensory neurons in your eye do not encode valence.

They do not detect whether something you see is good or bad. Valence is encoded later in the brain, in its interpretation of whether what it just observed is in fact good or bad. But the sensory neurons of a nematode are not like that. They directly encode whether something is good or bad. So the sensory neuron that detects increasing food smells directly drives forward momentum.

So we have these reward signals. When we move forward to the first vertebrate brains, what we see is the emergence of reinforcement learning.

And reinforcement learning is this idea of learning arbitrary behaviors through trial and error. And there’s a really rich history of reinforcement learning in AI, and we’ve started to merge this. There’s been a wonderful sort of marrying between neuroscience and AI, which I think probably a lot of folks on your podcast are familiar with, with temporal difference learning being an algorithm that we have found in vertebrate brains. We talk about this in human and monkey brains. But temporal difference learning, all the evidence suggests, also exists in fish brains.
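For readers who want the algorithm rather than the history, here is a generic textbook-style sketch of temporal difference learning, TD(0) value learning on a toy chain of states. It is not code from the book or the episode, just a reminder of what the update looks like; the states and reward are invented.

```python
from collections import defaultdict

# Generic TD(0) value learning on a made-up three-state chain.
states = ["start", "middle", "goal"]
reward_at = {"goal": 1.0}
V = defaultdict(float)            # value estimates, all start at 0
alpha, gamma = 0.1, 0.9           # learning rate, discount factor

for _ in range(500):
    s = "start"
    while s != "goal":
        s_next = states[states.index(s) + 1]          # deterministic step along the chain
        r = reward_at.get(s_next, 0.0)
        td_error = r + gamma * V[s_next] - V[s]       # the reward prediction error
        V[s] += alpha * td_error                      # nudge the estimate toward the target
        s = s_next

print(dict(V))  # values propagate backward: V['middle'] -> ~1.0, V['start'] -> ~0.9
```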

[00:29:09] Paul: You write a lot about fish. I found that very interesting, that you were so studied up on fish, which is sort of not part of the normal group of organisms to think about in terms of brains. So just an aside.

[00:29:24] Max: Yeah, well, I think what’s so interesting about fish is when we want to understand what emerged in early vertebrates. And the reason why I think understanding what emerged in early vertebrates is so important is because I think we misattribute the early function of the neocortex, because we don’t actually look at what existed prior to the neocortex. So much of the research is done on rats and monkeys, which obviously have the brain structures of mammals. But other vertebrates do a lot of really intelligent things without a neocortex. And so if we realize that a lot of the things that we attribute to the neocortex, such as recognizing complex objects in the world, are done readily by a fish that does not have a neocortex, and actually has structures that were precursors to the neocortex, it begs the question, why did the neocortex evolve?

What was the driving adaptive value of a neocortex? Because if the function we’re attributing to it existed readily prior, then it’s a hard argument to say that the reason we evolved the neocortex was to recognize things in the world.

And so I found fish studies, which I agree are not given, I think, enough credit, really useful in trying to delineate what was the sort of behavioral abilities that existed in early vertebrates and what really changed in mammals. And so there’s a bunch of fish.

[00:30:52] Paul: Textbooks, is that right?

[00:30:54] Max: Yeah, there’s a lot of fascinating work on it. For example, we think about the neocortex as enabling us to do really great object detection, right? And for example, mammals are really good at one-shot identifying an object despite changes in rotation or changes in scale. And that’s something we’ve worked really hard in AI to get convolutional neural networks to do. In a lot of ways, mammals are still better at one-shot identifying 3D rotation than CNNs, and we don’t really understand how mammal brains do that. But then I ask the question, okay, can fish do that? And the answer is yes. A fish brain can one-shot identify an object despite it being rotated in space.

Fish can identify the same face. You can train a fish to squirt water on a specific human face and get a reward. And if you show a picture of that face rotated, it’ll still go to the same face. You can do the same thing with pictures of different creatures. It identifies a rotated frog in one shot, despite it looking totally different, which a CNN does not do. You need an astronomical amount of data to get a convolutional neural network to identify rotated objects. Which is interesting, because a fish brain has no neocortex, but they do have the precursor to the cortex, called the pallium. And if you go into a fish pallium, you see at least three structures.

The pallium is a three-layered structure akin to our olfactory cortex and hippocampus. And these structures actually share phylogenetic origins with our hippocampus, our paleoamygdala, the cortical amygdala, and our olfactory cortex. So if you look at a fish pallium, they have a ventral pallium, which is akin to the cortical amygdala. They have a medial pallium, which is akin to our hippocampus. And they have a lateral pallium, which is akin to our olfactory cortex. And so somehow, out of this structure, or even more primitive structures like the tectum, they are able to identify rotations of objects in real time.

So I think what becomes really interesting to jump ahead, I guess, to mammals, is if fish are able to do that, and they can clearly learn arbitrary behaviors through trial and error, you can train fish to jump through hoops, through giving them rewards just fine.

They can remember this over a year. A year later, they will remember how to do all these tricks. So why did the neocortex evolve in early mammals? And so, if you look at the suite of comparative psychology studies, meaning what abilities do we see in non-mammalian vertebrates and what abilities do we see in most mammals, there are three that stand out. One is vicarious trial and error; this is what we talked about in the beginning. This is the ability not just to learn through trying and failing at things, but to pause and consider options before acting. And this is a huge thing in AI that we’re trying to figure out how to do well, which is also called planning.

So, in rats, David Redish has done some of my favorite studies on this. If you record hippocampal place cells in a rat, which get activated based on its location in a maze, and you train a rat to go to different sides of a maze to get different rewards, then when the rat pauses at a choice point and sniffs and looks back and forth, you can literally watch its hippocampal place cells playing out each option before acting. So we can literally go into the brain of a rat and see it deciding between choices. When we go into a fish brain, there are place-like cells in the homologous region of the hippocampus, the medial pallium. But they never encode future states. They only encode the current state that they’re in. We do not see them playing out options.

[00:35:02] Paul: Poor fish.

[00:35:07] Max: So, I guess, as an aside, there could be studies that are revealed in the next ten years that show fish do this. So we’re sort of operating on somewhat sparse evidence. So it would be fascinating if someone did discover fish brains able to do this. But as of now, the evidence doesn’t suggest fish are able to do so. That’s one ability that rats clearly have. The other is counterfactual learning.

So David Redish also did some amazing studies, on something called restaurant row, where he had a rat go around these four corners, where each time it passed one of these corridors, a sound would occur that indicated whether, if it went to the right to try and get food, the food would be released immediately, in like 3 seconds, or it would have to wait 45 seconds. And it became clear that some of these rats preferred certain foods over others. Some were, like, tasteless pellets. Some were, like, cherries, which rats loved. So going around this maze, or this restaurant row, presented rats with irreversible choices at each moment: I either can choose to go get this food, or I can go to the next one and hope that it’s released quickly, not in 45 seconds. And what he found is, when a rat passed up the opportunity to get a really quick meal, so it said 3 seconds to get a banana, and it said, no, I’m going to try and get the cherry, which I like more, and then the sound indicating 45 seconds played, it showed all the signs of regretting its choice. It paused. It reactivated, in its orbitofrontal cortex, the representation of the foregone choice. And the next time around, it changed its behavior to be more likely to actually wait, because it didn’t want to make the same mistake again. So that shows signs of counterfactual learning.

Another one of my favorite studies of this is in chimpanzees. You can teach chimpanzees to play rock paper scissors. And what happens when you see an ape play rock paper scissors? If it plays paper against you playing scissors, meaning it lost, on the next move it becomes biased towards playing rock. Now, this doesn’t make sense in standard reinforcement learning. In standard reinforcement learning, if I lost playing paper, I would be equally likely to play rock or scissors next time.

[00:37:25] Paul: By standard, you mean model free, correct?

[00:37:28] Max: Correct. Yes.

So there are two main types of reinforcement learning. One is called model-free reinforcement learning, which is what I argue evolved in early vertebrates, where you don’t play out futures, you’re just acting in direct response to stimuli. Model-based reinforcement learning is the idea that you have a model of the world that’s rich enough that it enables you to imagine taking actions and sufficiently accurately predict the consequences, such that you can make choices using your own imagination. So with model-free reinforcement learning, you would not expect an animal to be more likely to choose rock next. What you would expect is for them to be equally likely to choose rock or scissors, the two things that didn’t lose in the last move.

But chimpanzees don’t do that. They’re more likely to choose the move that would have won in the prior game, which only makes sense if they’re actually able to imagine the move that would have won, and then they become more biased to do that. So that’s also really strong evidence for model based reinforcement learning in chimpanzees.
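A small sketch may help separate the two update rules being contrasted here. The code below is a caricature built around the rock-paper-scissors example; the learning-rate value, the dictionaries, and the function names are all invented for illustration.

```python
# Toy contrast of model-free vs. model-based updates after losing a round
# of rock-paper-scissors (I played "paper" against "scissors").
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def model_free_update(values, my_move, lost, lr=0.5):
    """Model-free: only the move actually taken is devalued after a loss,
    so the other two moves remain equally attractive next round."""
    if lost:
        values[my_move] -= lr
    return values

def model_based_update(values, opponent_move, lr=0.5):
    """Counterfactual / model-based: imagine the move that WOULD have beaten the
    opponent's last move and boost it, which is the bias the chimpanzees show."""
    would_have_won = next(m for m in MOVES if BEATS[m] == opponent_move)
    values[would_have_won] += lr
    return values

print(model_free_update({m: 0.0 for m in MOVES}, my_move="paper", lost=True))
# -> rock and scissors stay tied, no special preference for rock
print(model_based_update({m: 0.0 for m in MOVES}, opponent_move="scissors"))
# -> rock gets the boost, the bias seen in the chimpanzee data
```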

So that’s counterfactual learning. And then the last one is episodic memory, and there’s a lot of controversy around that word. When I say episodic memory, I don’t mean the presence of an episodic self, which is a concept in psychology, or an autobiographical self, the ability to imagine yourself in the future and past. I think most evidence would suggest that only emerged in primates. But I do mean the ability to render a state of the world in the past. Sometimes people call that episodic-like memory, but that, at a bare minimum, does exist, I think, in non-primate mammals. For example, my favorite studies of this: there are two types of studies for episodic memory. One, I think, is not very convincing, which is this what-where-when memory. Can an animal remember the location of something, when it happened, and what was there?

Coming from an AI background, I could conceive of how you could do something like that with model-free reinforcement learning. It’s not very convincing that that requires simulation. But there’s another category of tests that people do which I do find convincing for episodic memory, where they do the following. This has been done on rats, where you present a rat with a maze, meaning you pick them up and you put them in a maze randomly throughout the day. And where the food is in that maze depends on whether or not they had had food in the three minutes prior to being presented with the maze. But across presentations, the maze is equally paired with both places where the reward is. So you can’t, model-free, just say: when I’m presented with this maze, I always go to the right and get food. It depends on what happened just prior, which you would think requires them to pause and imagine what just happened and then make a choice. And what’s even more convincing is rats can readily do this, but they stop being able to do this if you inhibit their hippocampus. So if you inhibit the hippocampus, which should prevent them from being able to re-render these past events, then they are no longer able to remember what just happened. So I think this is pretty convincing evidence that what they’re doing is imagining something that just happened and using that to inform their future behavior.

[00:40:48] Paul: So those three things kind of encompass the simulating breakthrough, which was number three, to fast forward, because presumably, you were interested. So I know that you have an interest in the neocortex, and I don’t know if that’s where this all started for you, because you do a lot of, like, modeling and theoretical work in terms of what the neocortex can do, sequencing and learning, et cetera. And so, presumably, you were, like many of us, enthralled with how awesome humans are. And so we’re kind of stepping through, and your book kind of steps through toward how awesome humans are.

And the fourth breakthrough was mentalizing. So we’re sort of stepping our way toward how awesome humans are, I suppose.

[00:41:36] Max: Yeah.

So what’s interesting is, in the transition from mammals to primates, we see not a lot of brain changes.

The brain scaled a lot. So early primate brains were surely much larger than early mammal brains. But if you compare the brain of a chimpanzee to that of a rat, there are not many differences. I mean, the fundamental interconnections and fundamental brain structures are really all there.

The only two really different things are the presence of what’s called granular prefrontal cortex. It’s called granular because it has granule cells in layer four. So most regions of neocortex are a six-layered structure. However, the frontal cortex of early mammal brains only has five layers; it’s missing layer four. That agranular frontal cortex also exists in the homologous regions of the human brain. There are a bunch of really interesting ideas about why that might be, which we can get into.

[00:42:35] Paul: We should.

[00:42:36] Max: But in human brains and in primate brains, there’s also a region of frontal cortex that is granular, that has layer four, and that’s called granular prefrontal cortex, which encompasses many, many different subregions, but that is unique to primates. And then there are certain regions of the posterior cortex, sensory cortex, called the superior temporal sulcus and the temporoparietal junction. Other than these two sorts of areas, the new areas of frontal cortex and the new areas of sensory cortex, there isn’t much different about primate brains. So the question is, what do these two new regions actually do? And the history of this is interesting, because early on, we struggled to understand what granular prefrontal cortex even does, because a human can have really gross damage to this region of their brain and still function relatively normally. It’s not obvious when you meet someone who has damage to the granular prefrontal cortex, whereas it is immediately obvious if you have damage to your agranular prefrontal cortex: you become relatively unconscious, or you suffer from mutism, meaning you just won’t even speak. If you have damage to non-primate areas of sensory cortex, like visual cortex, you’ll become blind. So it’s immediately obvious when someone has damage to these regions, but it’s much more nuanced when you see damage to these primate areas.

But the more we look into them, a reasonable explanation of what these things do, especially given the connectivity, is that they render a model of the older mammalian model. So granular prefrontal cortex does not get direct sensory input. Agranular prefrontal cortex gets direct input from the amygdala and the hippocampus; it gets all of this input about the emotional state of the animal. Granular prefrontal cortex only gets input from agranular prefrontal cortex. So you can kind of, as a first approximation, look at it as a second-order model of the first-order model that emerged in mammals. And in this way, there is a sort of layering.

The broad idea of MacLean, of brain layering, I think does in some ways apply. And what I think is interesting is, if you think about this conceptually, what does modeling your own simulation mean? So if you think about the original mammalian structures, they are either simulating the current state, which is what could be called perception by inference, modeling what I’m currently seeing, or simulating, meaning modeling what’s not currently present. This is all, broadly, thinking. This is what we mean by thinking. So granular prefrontal cortex is kind of modeling an animal’s own thinking, which in psychology people call metacognition: thinking about thinking.

And what is interesting, this idea of modeling your own simulation.

One, it is a good explanation of the sort of impairments that emerge from granular prefrontal cortex damage, where we can see big challenges in theory of mind. So, people with damaged granular prefrontal cortex really struggle to think about other people’s current mental state and to reason about and take the perspective of others. The other really interesting thing is there’s a great study that showed that when you ask people to imagine a future state of the world, and you compare someone with hippocampal damage, a control, and someone with granular prefrontal damage, you see fascinating differences. The person with hippocampal damage struggles to imagine a rich future scene of the world. So they will not describe things like the color of the trees or the rich detail of the sky, but they will readily describe themselves, so they can happily describe their own personality traits. The person with granular prefrontal damage but no hippocampal damage describes a future containing many of the rich details of the external world, but it lacks themselves. They can’t imagine their own sort of autobiographical self in this future state, which is more evidence that our own model of self, of our own inner thinking and identity, comes from these primate regions.

And so, okay, if we now take my little framework of comparing this concept to the new behavioral abilities we see in primates, again, there’s another sort of strong three which, I think one can make a strong argument, emerge from this idea of mentalizing, modeling your own self. One is theory of mind. So there’s lots of good evidence that primates can imagine other people’s perspective, which, of course, makes sense. What do we do when we imagine someone’s perspective? We put ourselves in their shoes. We take a model of ourself, and then we change the context and see what we would do if we were in their situation. It’s exactly what you would expect from the sort of self-modeling that seems to exist in primates. Second is really strong imitation learning. Primates are very good at learning motor skills through observation.

And at first, it’s not obvious why mentalizing should support that. But taking the lens of AI, it actually is quite clear. One of the problems with imitation learning in the world of AI is that people are constantly making micro-adjustments. And if you just do raw imitation learning, you learn the wrong things. So we’ve tried this with autonomous vehicles, where we’ve trained AI systems to directly copy drivers, and this doesn’t actually work very well, because what you don’t realize when you’re driving is that you’re constantly making these micro-adjustments. And so what happens? The AI system learns to do these micro-adjustments in the wrong ways. And so Andrew Ng actually came up with this really clever idea called inverse reinforcement learning.

[00:48:18] Paul: Was that his idea?

[00:48:20] Max: Yeah, Andrew Ng was one of those. He and Pieter Abbeel came up with this idea.

[00:48:24] Paul: Okay. I always struggle with where ideas originate.

[00:48:28] Max: Well, it’s possible, I guess they did some of the most famous work on it. It’s definitely possible. The core idea was someone.

[00:48:35] Paul: Sorry to interrupt.

[00:48:36] Max: No.

So inverse reinforcement learning is the idea of, well, what if first, when you watch someone do really complicated motor behaviors, you try to infer their reward function? In other words, understand what they’re actually trying to do, and then you train yourself just to fit the reward function you’ve inferred. So when you watch someone drive, you realize, oh, they’re trying to stay in the lane. So then when you drive, you’re not copying them directly. Now you’re trying to stay in the lane, and then you can reward and punish yourself as to how well you’re adhering to the inferred reward function. And so we have found inverse reinforcement learning to be a very effective mechanism for making imitation learning AI systems work well. This maps very nicely onto the idea of mentalizing, because we know primates are very good at watching someone do something and inferring what they’re trying to do. And we know this because if we give a chimpanzee a puzzle box, and this experiment has been replicated many times, and you have a human experimenter do all of these steps to get inside the puzzle box, the chimpanzee will learn this immediately, but it will skip steps that are obviously irrelevant, so it’s not directly copying you. If you scratch your head in the middle of the experiment or you just tap something, they don’t do those steps. Interestingly, a human will do the irrelevant steps, which is a whole other thing, but a chimpanzee won’t.

[00:49:55] Paul: I mean, a human child will, anyway, right?

[00:49:57] Max: Human child will, yes.
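As an aside for the AI-minded, here is a schematic of the inverse reinforcement learning idea discussed above: infer what the demonstrator is trying to do, then score your own behavior against that inferred goal instead of copying the demonstrator’s micro-adjustments. It is a caricature of the concept, not Ng and Abbeel’s actual algorithm, and the lane-keeping data are made up.

```python
import statistics

def infer_reward(demonstrations):
    """Guess the demonstrator's goal from their trajectories. Here the guess is
    simply: they are trying to keep 'lane_offset' near the average value they held."""
    offsets = [step["lane_offset"] for traj in demonstrations for step in traj]
    target = statistics.mean(offsets)
    return lambda state: -abs(state["lane_offset"] - target)

# Hypothetical demonstration data: a driver wobbling around the lane center (offset 0).
demos = [[{"lane_offset": o} for o in (0.1, -0.2, 0.05, -0.1)]]
reward = infer_reward(demos)

# The learner now optimizes the inferred reward instead of copying the wobbles.
candidates = [{"lane_offset": 0.0}, {"lane_offset": 0.8}]
print(max(candidates, key=reward))  # {'lane_offset': 0.0} -- stay in the lane, skip the jitter
```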

Anyway. So being able to reason about the mind of another is clearly useful in imitation learning, because it helps you understand what their intent is. And then the last is anticipating future needs. This is the most controversial, because there are not a lot of studies on this, but this is the ability to actually take an action today to satisfy a need you do not currently have. So we humans do this all the time. We go grocery shopping when we’re not hungry. And so this is actually a harder ability than just model-based reinforcement learning, because I don’t only have to imagine an outcome that I then get excited about now, which would be like me imagining going to my refrigerator and grabbing food, which sounds nice because I’m hungry right now. I need to imagine going to the grocery store now, because tomorrow I will be hungry, even though now I’m not. And so this means we have to be able to infer future need states or future reward signals that we don’t currently have. And there have not been a lot of studies on this across animals. There’s been one relatively famous one that is really interesting. People need to try and replicate it, but it suggested that primates can do this and other mammals cannot, which is a fascinating finding. So the way they did this study was they had rats and squirrel monkeys choose between two options. One was a high-treat option and one was a low-treat option. So the high-treat option gives lots of food, and the low-treat option gives very little food, but they intentionally chose treats that induce a lot of thirst. So dates and raisins make both primates and rats very thirsty. And so if you choose the high-treat option, you’re not going to get water for a very long period of time. And if you choose the low-treat option, you’re going to get water immediately. And they baselined it to ensure that, for both species, they chose ratios of food that induced an equal percentage increase in thirst. And they found that rats were unable to choose the low-treat option in anticipation of their future thirst, but monkeys were.

And so what does anticipating future needs have to do with mentalizing? Well, is imagining a future state of your own mind where you need different things really any different than imagining the state of another human being or another species, and imagining how they might feel in a new situation?

And so Thomas Suddendorf is the one who came up with this idea that future need state anticipation might really be the same underlying algorithm as theory of mind. Imagining the state of someone else that’s disjointed from your own is not really different than imagining a future state of yourself that’s disjointed from your own. So we see all three of these abilities perhaps emerge in primates from this idea of mentalizing, implemented in granular prefrontal cortex and the new sensory cortex regions.

[00:52:45] Paul: I want to interrupt you at every sentence, because so many of these topics are fascinating and merit their own book-length treatment. Right. And somehow you’ve included it all in your book. Maybe a non sequitur, but thinking about mentalizing, having a simulation of a simulation: could you improve the human brain? Like, if we just took what you know now, given your background and your engineering sort of perspective?

So what we’re talking about so far is, like, the story of how, evolutionarily, these things have come about, what purposes they’ve served, what functions they serve.

Where’s the limit? Could we have a simulation of a simulation of a simulation? If we just keep adding on new simulations of simulations, would that improve things, and why and how? Or can we even talk about the story of the future, improved brain?

[00:53:48] Max: So, I think what’s interesting is our goal in AI is clearly not to just recreate the human brain. I mean, the human brain has lots of flaws.

It has intellectual flaws and instinctual flaws, and both of these are problematic. Instinctual flaws, I think, are easier to understand, and perhaps what people are more afraid of in the world of AI. Instinctual flaws are things like: in the political world that primates evolved in, we became very status-seeking, obsessed with our places in a hierarchy. Primates are a hierarchical species, very gossipy, always trying to cozy up to the people at the top, et cetera. We see this in chimpanzee societies. This is the worst version of humans, right? Do we want to recreate AI systems that have the same instincts? I hope not.

So I think there are lots of instinctual things that humans evolved to have that are evolutionary baggage that we probably don’t want, almost definitely don’t want, in AI systems. There are also intellectual flaws. So as much as there are things human brains have that we want to recreate in AI systems, there are also things we don’t want to send along. For example, humans reason about things by simulating, which has benefits but also has downsides. So Daniel Kahneman has something called, I don’t know if he created it, but he talks a lot about it in his work, the representativeness heuristic. And the famous example of this that he came up with is: suppose Kathy is meek and has glasses. Is she more likely to be a librarian or a construction worker? And everyone says librarian, but that fails to include base rates. There are way more construction workers than there are librarians, so even if 5% of construction workers are meek and wear glasses, and 95% of librarians are meek and wear glasses, it’s very likely that there are actually more meek, glasses-wearing construction workers. But that’s not how we thought about the problem. We imagined a librarian and imagined a construction worker, and compared the description of Kathy to each of these two things. And so that is the way in which mammals tend to reason about things, but it comes with lots of biases and flaws.
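For the record, here is the base-rate arithmetic behind that example, using the 5% and 95% figures from the conversation plus an assumed population ratio; the 50-to-1 number is invented purely for illustration.

```python
# Base-rate check of the librarian example: suppose there are 50 construction
# workers for every librarian (an assumed, illustrative ratio).
librarians = 1
construction_workers = 50

meek_librarians = 0.95 * librarians              # 95% of librarians fit the description
meek_construction = 0.05 * construction_workers  # only 5% of construction workers do

p_librarian = meek_librarians / (meek_librarians + meek_construction)
print(round(p_librarian, 2))  # ~0.28 -- Kathy is still more likely a construction worker
```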

Another example of this is confirmation bias. Perception by inference, in other words, trying to render a simulation of the world and then comparing our sensory inputs to that simulation and confirming that what we see is aligned with what we’re imagining, which is how we perceive the world and has lots of benefits, also comes with these other problems, such as confirmation bias. We always start with a prior. We start with an idea of what we think is true, and then we only update it if evidence is sufficiently different to require us to update our priors. But that leads to things like confirmation bias. If I believe the state of the world is a certain way, I’m filtering the world through that lens, which is what you would expect from a Bayesian brain.

[00:56:52] Paul: It’s also very efficient, and so there are a lot of benefits for those as well.

[00:56:57] Max: Right, exactly. Very efficient. But it comes with problems. We see lots of issues with confirmation bias.

So these are just two examples of things where, if we were to try to create a superintelligence that is the best of humanity, I think we would want it to have some forms of Bayesian reasoning, reasoning by simulating, but also have other mechanisms to make sure it considers base rates.

And it should be more accurate at updating priors if new evidence is contrary to what its expectations were, things like that.

I think the goal of learning about the brain to help AI is not to recreate it wholesale, but to find the special components of human intelligence and gift those to AI systems. For example, I think mentalizing is actually a great one, and it raises some interesting philosophical questions.

One of the main problems with AI systems, and Nick Bostrom talks about this a lot, is ensuring they understand what we say. This is famously talked about as the paperclip problem. He has an allegory where you have a superintelligent AI running a paperclip factory, and we say, hey, can you maximize paperclip production? And then it turns Earth into paperclips. And the point of that allegory is the AI wasn’t evil. It just took our request and satisfied the criterion: maximize production of paperclips. It reveals that it’s actually very hard for humans to synthesize their preferences in words. And we don’t notice this at all, because our language is empowered by both brains having mentalizing. If I’m running a factory and you’re my boss and you say, hey, please maximize production of paperclips, I’m going to imagine what you actually mean by that. I’m going to mentalize about your mind and your preferences and simulate options. It’s going to be immediately obvious to me that if I destroyed Earth with paperclips, that’s not what you wanted.

And so gifting AI systems with some mechanism of mentalizing about what we mean by what we say is clearly going to be a very important path towards AI safety. It comes with some other hard problems, though.

First, the way it seems that we mentalize is by having a relatively common set of neural machinery, such that I can infer how you would feel about things, because I’m not so different from you, right. There is a relationship between the way I feel about the world and the way you feel about the world, such that I can put myself in your shoes and do a decent job of predicting what you might want. But if we build AI systems without human preferences, because they’re not humans, how do we ensure that they’re going to be good at simulating what a human would want?

And that’s actually a very hard problem, because we don’t want them to have human preferences.

[00:59:52] Paul: Right.

[00:59:53] Max: We don’t want them to have all of the bad stuff that humans want, but we want them to understand what we mean. And that’s actually a very challenging line to draw.

[01:00:01] Paul: Yeah. Do we understand what we mean?

[01:00:04] Max: No, we don’t. But we can do a better job. We do a better job than the computers in Nick Bostrom’s allegories. The way I would imagine this is: there is information translated into words between brains. We do a decent job of taking the possible meanings of it and making a smaller bounding box, which is relatively accurate, more accurate than all the possible meanings. But it’s not perfect. We can still clearly misinterpret what people are saying, but we remove a lot of these crazy scenarios that are actually not obvious to remove, like destroying Earth to create paperclips.

[01:00:44] Paul: Yeah. You used the word perfect there. And something I was thinking while you were discussing this was that there’s just such a normative component to building, quote, intelligence, right? So first of all, thinking about our biases and what brains do that are good and bad: those terms, good and bad, carry a valence, there’s a normative component, and so does what we want AI to do, and whether we want AI to emulate brains, and which functions, which algorithms, et cetera.

So the more that I learn about brains and minds and AI, the less I know, right? So this is a common trope, and I’m wondering if you have felt the same way.

I don’t know what intelligence is. Do you know what intelligence is? So we use the term intelligence willy-nilly, right? And there’s a normative component. There’s the perfect intelligence, right, that we want AI to have, that our brains do and don’t do. But do we know what we’re even talking about when we use it? I’m sorry, I know this question gets asked a lot, like, what is intelligence? But it is a question.

[01:01:58] Max: I think it’s a great question.

You’ll notice that in my book, I don’t actually define intelligence.

[01:02:04] Paul: You do not. I was wondering, because I emailed you, and I was like, I don’t think you defined intelligence in your book, which is great. I mean, was that intentional?

[01:02:12] Max: That was very much intentional. I mean, I deliberated for a while as to whether or not to define it, and the reason I didn’t is because there is actually no really good, rigorous definition of intelligence. What we mean by intelligence is many different things in many different contexts, but all of them are informed by the story of how the brain works and how the brain evolved and how it relates to what we’re building in AI. So it almost is not necessary to rigorously define it when, in all of the possible scenarios of how you define it, the story is useful in the same ways. So it just felt unnecessary, and I’ll describe why.

So, for example, some people think about intelligence as the ability to solve problems, to achieve goals.

Now, that’s a really interesting definition, because then it requires us to imbue a thing with goals, and that is actually really hard to assess. What do we mean by a thing having a goal? Through that lens, does the universe have a goal, which is maximizing entropy, and is it solving lots of problems to maximize entropy? On what grounds do we say the universe doesn’t have a goal?

And some people say, well, the definition of intelligence has to include some notion of learning, because a preprogrammed robot in a factory that we have given a tree of if-else statements to solve problems, but that doesn’t learn, is not actually intelligent, you know. The human-centered AI lab run by Fei-Fei Li has a wonderful definition of intelligence, which is fuzzy, right? They say it’s something along the lines of anything that can learn or use techniques to solve problems or achieve goals in an uncertain world. So it’s a bunch of hedges, right? Because it means multiple things. A convolutional neural network is intelligent because it can identify objects. It doesn’t really achieve a goal. Maybe during learning it had a goal of minimizing errors, in a sense, but it’s not really seeking a reward function in the world. And yet in some sense it’s solving a problem of identifying an object that is different than the training data it saw. So I just think it’s not obvious what we mean by intelligence. And whenever we imbue things with the notion of goals, we’re entering philosophical territory, because it’s actually very hard, if at all possible, to definitively prove the existence of a goal from the outside.

What we mean by a goal, I think most people would say, is not just the optimization of some equation, because if that’s what you’re saying, then the universe has a goal: just maximizing entropy. Right?

So, yeah, that’s why I kind of avoided it.

[01:05:11] Paul: No, I actually appreciated it. I only realized it toward the end of the book.

I thought, I don’t think he defined it. And then I didn’t go back, because I have a paper copy, and I can’t just do Control-F or Command-F to try to find it. So I commend you for that, actually. But you also don’t talk about, in the book, why you don’t define it. Yeah, there’s that old phrase: how do I know what I think until I see what I say? I don’t remember who said that; I’m butchering the quote. And everyone says that writing things down helps crystallize how you think about things. But a different way to think about that is that when you write something down, it biases you to think about it in the way that you wrote it down, right? So it kind of serves as both a crystallization and a biasing. So sometimes I’m afraid to write something down, because I want to continue to have multiple ideas instead of committing to one idea.

Where was I going? I think I had a question about this.

Is that what happened with you? You said it was very satisfying, right, as you were writing. I mean, was this the formation of your ideas, helping to crystallize those ideas? And did you learn what you thought by writing it?

[01:06:34] Max: No, this is where being an outsider, I think, was useful, because I’m not pursuing a career in academia. I’m not trying to be right.

I’m trying to discover what is right, whether it’s my idea or someone else’s. I mean, most of the ideas in the book, the vast majority of them, are others’, and I do my absolute best to cite them and give them credit. I’m just piecing it together.

So I think not having an ulterior sort of career motive is freeing in the sense that I’m very willing to throw away ideas that are clearly not right. And I don’t feel attached to one idea because I’m defining my career on theory x.

There were many places, when I was doing the reading for this, where I did throw away ideas, and it just got to the point where it became clear to me that this was a useful first approximation that a lot of the evidence aligned to. And to make sure I wasn’t barking up the wrong tree, I submitted a lot of the ideas to multiple peer-reviewed journals, made sure they went through peer review, got the ideas published, and then collaborated with a variety of neuroscientists to update them. There were flaws in the original ideas that got tinkered with as new evidence came in.

No, I don’t feel like I had this idea and then forced evidence into it. I think it really emerged from the evidence, but it’s not perfect. I mean, I can readily cite counterevidence off the top of my head that complicates the model. For example, there is one study that did find latent learning in a fish; it was only done once and hasn’t been replicated. But that would be a very interesting counterexample to the idea that fish do not have model-based reinforcement learning. That could mean multiple things. It could mean that it independently evolved in that species of fish, or it could mean the basic idea that model-based reinforcement learning only exists with the neocortex in mammals is wrong, and it actually did exist in early fish.

There are open questions on the degree to which primates actually can engage in anticipating future needs. So people have questioned that study I cited on the raisins and water, and people are asking for replications of it. I think there was one attempt to replicate it that didn’t find results as strong. So it’s possible that that actually is not the case. So, yeah, I think the verdict, of course, is still out, but given the available evidence, I think it’s an instructive first approximation.

[01:09:13] Paul: Yeah, I agree. Again, I’ll say it. I mean, it’s a beautifully written book and just chock full of information, so I highly recommend it to people.

I struggle to form coherent questions around what I’m about to ask you.

So, math is beautiful, algorithms are beautiful.

The idea of a clean function is beautiful, like cortex does x.

Do you still? So the more that I learn about these notions, cognition, et cetera, the less I’m willing to say brain area x does cognitive function y, the cortical column does generative prediction, et cetera, something that you talk about in the book, because, as you were just saying, all stories become more complicated the more information there is.

Through your journey into learning about these things, are you still married to the idea that the brain does an algorithm, that there is a function that is consistent across, for example, neocortical columns, across everything?

The way that we naturally need to think about things is to simplify them and say, brain area x does function y.

Do you think in those terms?

[01:10:48] Max: So I think, actually, one of the motivations for having an evolutionary story was, I mean, in one of my original introductions I had this, but I went away from it because it felt too aggressive against the idea of compositionality, meaning that brains have separable functions. The goal of the book isn’t to be a takedown of that idea, but one of the motivations for the evolutionary story is to avoid the mistake of trying to reverse engineer the brain by assigning functions and decomposing it. Because the question we ask is not, what does the neocortex do? The question is, when you add a neocortex to this soup of complicated stuff, what new abilities emerge? Which is a subtle difference, but an important one, because there are many ways that adding this thing to the soup of a complex emergent network could lead to the result that you see, without requiring the function of the neocortex to be thing ABC.

So I find the evolutionary story to be a really useful tool to avoid this idea that the neocortex does this, the basal ganglia does that, et cetera, et cetera.

So I think it’s almost definitely the case that our desire to assign clean functions to things is wrong, and I think the evidence that keeps emerging continuously shows that. This is one reason why I think one of the ways AI can really help in understanding the brain is, my hunch tells me, that the next two decades or so are going to see a really big explosion in computational neuroscience, where, instead of just trying to model what the column does, we can actually start trying to do something closer to whole-brain modeling and just say, hey, what if we try to approximate what different structures, individually, algorithmically, are doing? Not assigning a function, just saying, algorithmically, it’s doing something akin to x. And we start wiring an artificial whole-brain model up, and then we try to simulate the emergent properties of that, which is very different than saying the neocortex does this and then the basal ganglia does that.

No, I guess that’s a long-winded way to say I don’t feel married to the idea of specific functions.

The only thing that I think is mechanistically really useful about approaches like Jeff Hawkins’s, which I am really amenable to, is this: I would draw a distinction between what Jeff Hawkins is doing and the attempt to assign function to structure. What he’s doing is really trying to assign algorithm to structure. And I think that is a viable approach, because it’s one thing to say cognition or object recognition happens in the neocortex, but it’s different to say this specific algorithm is being implemented here and, in partnership with the other algorithm over here in a broad network, leads to these emergent properties and these abilities and these functions. And so I do think looking at the algorithms implemented by structures is a more fruitful approach than trying to assign functions to structures.
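A minimal sketch of that algorithm-to-structure framing, assuming entirely made-up toy modules and numbers; nothing here comes from the book or from any real whole-brain model:

```python
# Toy illustration of "assign algorithm to structure, wire structures
# together, and look at what emerges". Module names and the algorithms
# inside them are invented for illustration only.
import random

def cortex_like(sensory, prediction):
    """A toy predictive module: return an updated prediction and its error."""
    error = sensory - prediction
    return prediction + 0.5 * error, error

def basal_ganglia_like(candidates, values):
    """A toy selection module: pick the candidate with the highest value."""
    return max(candidates, key=lambda c: values.get(c, 0.0))

prediction = 0.0
values = {"approach": 0.2, "avoid": 0.1}
for step in range(5):
    sensory = random.random()
    prediction, error = cortex_like(sensory, prediction)
    action = basal_ganglia_like(["approach", "avoid"], values)
    values[action] += 0.1 * (0.5 - abs(error))  # toy credit assignment
    print(step, round(prediction, 2), action)
```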

[01:13:51] Paul: Got you. This makes me think about the idea of process, which you talk about in the book in terms of what language gives us. So your fifth breakthrough is speaking, which is, of course, wrapped up with language. And you use the example of DNA as enabling things, right? So it’s the process that it enables and not the function itself, for example. And then you make that analogy to language as well. So could you talk a little bit about that? And of course, with large language models these days, we have to touch on those and what language enables, et cetera. Can you talk a little bit about the fifth breakthrough from that perspective?

[01:14:39] Max: Yeah, I think there’s been so much work on language and the evolution of language, and it’s still very inconclusive. But I think what there is broad agreement around is why language is so powerful as an ability.

And one reason why language is so powerful is that it enables interbrain transfer of ideas. So if you think about sources of learning through the evolutionary story, you can almost see the entire five-breakthroughs model through the following lens: it has been a progressive expansion of getting new sources of learning for brains. The very first vertebrates, with reinforcement learning, could learn from their own actual actions. So a source of learning new arbitrary behaviors comes from actually trying things and seeing whether they succeed or fail: trial and error. With mammals, there’s a new source of learning in addition to trial and error, not only your own actual actions, but your own imagined actions. So I can imagine doing things, learn from the outcomes, and then make a decision without actually doing it. That’s a new source of learning. With primates, with the ability to understand the intent of others, a new source of learning was other people’s actual actions. So I can watch you use a tool, and I can now learn to use the same tool simply by figuring out what you’re trying to do, then teaching myself in my head to try to do the same thing, and then I imitate these skills. But throughout this entire arc, it has never been possible for one animal to learn from another’s imagined outcomes. In other words, I can’t look into your brain, see the result of your own simulation, and then change my behavior on the basis of that. That is what language enables us to do. Language enables me to go on a hunt and come back and inform you of my own episodic memory of the event: I remember going there, and the red snake being edible, and the green snake being really dangerous. I take that simulation and I can translate it to you, so now it’s part of your mental model of the world. We can plan a hunt by saying, hey, let’s split up. I’m going to go right, you go left, I’m going to chase the animals in your direction, and then you’re going to catch them. That is me engaging in vicarious trial and error, imagining a plan that works, and then translating that plan to you and three of our friends, and then we all execute the same plan. So that’s what’s so incredible about language. But what ends up happening as a consequence of that is this process that has been talked about ad nauseam in the humanities and anthropology, which is culture, which is this ability to have ideas that are passed between brains in a non-uniform way. In other words, these are Richard Dawkins’ memes: ideas that lead to the successful survival of individual humans are more likely to propagate than ideas that don’t. So this becomes almost a secondary layer of evolution, which is the evolution of ideas. One form of idea is tool use. So if we successfully figure out how to make bone needles for sewing, that idea is really useful, and that idea will be propagated through generations. People can start adding to those ideas, so ideas start accumulating as they’re passed along. Ideas, or memes, can also include cultural principles: how we choose to treat each other, how we deal with people who violate what we think is right and wrong, that type of stuff.
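A minimal sketch contrasting those sources of learning in a toy two-snake world; all names and numbers below are illustrative assumptions, not from the book or the episode:

```python
# Toy contrast of three sources of learning: own actual actions,
# own imagined actions, and another brain's simulation shared via language.
import random

REWARD = {"red_snake": +1.0, "green_snake": -1.0}  # the true (hidden) world

# 1) Trial and error: learn only from your own actual actions.
values = {"red_snake": 0.0, "green_snake": 0.0}
for _ in range(20):
    action = random.choice(list(values))
    outcome = REWARD[action]                       # must actually try it
    values[action] += 0.3 * (outcome - values[action])

# 2) Imagined actions: simulate with an internal world model, no real trials.
world_model = dict(REWARD)                         # assume a learned model
best_plan = max(world_model, key=world_model.get)  # vicarious trial and error

# 3) Another brain's simulation, via language: fold a speaker's episodic
# memory into your own world model without experiencing the outcome.
listener_model = {"green_snake": -1.0}  # "the green snake is dangerous"

print(values, best_plan, listener_model)
```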

So what language enables, by transferring ideas and thus letting them pass across generations and persist, is this process of idea evolution, or memes. And that is sort of what I call the singularity that already happened, where all of a sudden there’s this explosion of complexity that’s never happened before.

[01:18:45] Paul: So I don’t know if you want to talk about it, but the book is framed around five breakthroughs, and then you speculate on perhaps a sixth breakthrough. Do you want to talk about that?

[01:18:56] Max: Sure. Yeah.

[01:18:57] Paul: Okay.

Which is, well, maybe you can describe what it is. So I was asking you how one would improve the human brain, right? Because if we think of evolution as an arrow, which it’s not, we’re at the peak of that arrow, which we aren’t. But this is all we have to compare to, right? So it’s hard to predict the future, obviously.

But you do speculate a little bit on what the sixth breakthrough could be, and it is artificial intelligence, essentially us passing on our intelligence to artificial intelligence. Or maybe you can correct me.

[01:19:38] Max: So I think it’s hard to argue that the next big frontier will not be building artificial intelligence in our image, in a silicon based medium.

[01:19:51] Paul: In our image, yes.

[01:19:53] Max: And I think the reason it will be in our image is, well, there are two reasons why it’ll be in our image. One, the economic incentive is for human usefulness. What will humans pay for? And that means it’s going to be wrapped around some notion of human preferences and desires, because we’re going to want them to do things that satisfy human needs, at least in the beginning. So in that sense, they’re going to be imbued with at least some form of what humans want. An example of this is language. If another species were building an AI system and they communicated through non-language mechanisms, one of the breakthroughs that they built into the AI system would not be language. And yet, right now, most of the AI investment dollars are going into scaling up language models. Language is a very human thing, but the reason we do that is because that’s how humans want to interact with these AI systems.

I think they will be imbued with human, yeah, they’ll be imbued with human preferences, or human-like abilities, on two grounds. The second ground is that we can’t help but think about things the way we think about things.

[01:21:05] Paul: That’s a tautology.

[01:21:06] Max: That is a tautology, yes. And what that means is that when we’re trying to build something that is, quote unquote, intelligent, when we can’t rigorously define intelligence, which probably means it’s some complex self-image that we have, and a bastardized one, I would add, we’re trying to build something that’s just vaguely like us.

It’s very likely we’re going to start imbuing our own types of stuff on them. We’re going to want them to smile when we are laughing. It’s going to be weird to us if they’re not engaging with us in some way that we understand.

I think they will have some notion of their creators, which obviously were us, but they’re also going to have lots of abilities that we don’t have.

For example, there’s no reason why these AI systems will have any reason to die.

There’s no reason why these systems will naturally fade the way that a biological brain does.

There’s also no reason why we have to imbue these systems with our evolutionary baggage as primates: seeking hierarchy, wanting to dominate others, constantly assessing and referencing our own egos compared to other people, being competitive, all these things that are like the worst versions of us. There’s no reason why an AI system should have any of that, and I hope we don’t imbue systems with it.

I think the big question in the next century, on which there’s very big disagreement from various people in the field, is: what does the future of us plus AI look like? And I think there are really only three outcomes. One outcome is that we subdue AI through regulation or something else, so that it never becomes humanlike, and it just becomes another tool, like a computer, and humanity persists, and evolution ensues as biological evolution has, through genetic variation.

The other extreme of this, which I think is more likely, fortunately or unfortunately, is that humans are just supplanted by AIs. In other words, eventually AIs are so humanlike that we have AI children, and eventually culture will shift towards it being odd to have a biological child that will suffer from disease and will inevitably die and have all these limitations, when you could have your AI child, which can live forever and doesn’t suffer from depression and all of these things. And whether we like it or not, I’m not saying this is good or bad, I think it is likely that over time people’s choices might shift away from having biological children, of their own volition.

[01:23:53] Paul: I can tell you, as a parent, it’s odd currently to have a child that isn’t always tethered to a screen.

[01:23:59] Max: Yes. Right. So we’re already kind of hybrids, which goes to the third option, a middle ground, which I think is less likely but a virtuous one to pursue: some merging between us and these AI systems, which is what Elon is trying to work on at Neuralink, where we become these sort of cyborg hybrids.

The reason I think that’s unlikely is because I think the technological challenge of building humanlike AI systems in silicon is astronomically easier than the technological challenge of translating the biological signals in a human brain to an AI system. So, for example, a very crude way to think about this is that if we wanted to actually have a download of human thoughts that was fully accurate, we would need to record every neuron in your brain in real time. That is impossible.

Or it’s technologically so infeasible right now that people can’t even conceive of how you would do that. I mean, you can’t put a wire on every neuron in your brain and record its action potentials.

You could try and just do cortical neurons. But even as we’ve talked about here, I think it’s unlikely that the cortex is doing enough of everything to actually understand what you’re doing without also recording the thalamus and the basal ganglia and all these other things.

Hopefully I’m wrong, but I see lots of good ideas that make me confident we’re going to have humanlike AI systems in the near future, near future being the next decade. And I have not seen any actual ideas on how we’re possibly going to take a human brain and reliably merge it with an AI brain in a way that actually lets me do things like, hey, I want to, as in The Matrix, download the ability to fly a helicopter, or, I want to learn French, push a button and know French.

I don’t see any good ideas that I think make me confident we’ll be able to do stuff like that.

[01:26:04] Paul: Well, thank you for putting the results of your learning into the book, and thanks for my copy of it. I appreciate you having this conversation with me today. So thanks for spending time with me, and thanks for writing the book.

[01:26:17] Max: My pleasure. Thanks for having me.

[01:26:34] Paul: I alone produce Brain Inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you want to learn more about the intersection of neuroscience and AI, consider signing up for my online course, Neuro-AI: The Quest to Explain Intelligence. Go to braininspired.co to learn more. To get in touch with me, email paul@braininspired.co. You’re hearing music by The New Year. Find them at thenewyear.net. Thank you. Thank you for your support. See you next time.