Brain Inspired
BI 124 Peter Robin Hiesinger: The Self-Assembling Brain

Support the show to get full episodes and join the Discord community.

Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the “start”. This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won’t be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.

Transcript

Robin    00:00:03    There’s only one way to do it. You need to let it grow. You need to let an amount of information unfold that you then need. If you wanted to describe that end point bit by bit, it would be quite a lot. But if you would only want to describe the information needed to grow it, it would be very little. But you can’t predict from the little information what the end point would look like without actually growing it. And the genome is always just there. It’s like a book that’s always there, and you always just need to decide what to read in that book and how to access that book. It’s just enormously complicated. You can’t just open page 255. You literally need a very strange combination of, say, 30 different proteins that are super unlikely to ever exist at the same time in the cell. But if they do,  

Speaker 2    00:00:58    This is brain inspired.  

Paul    00:01:11    That was the voice of Peter Robin Hiesinger, who recently authored the book The Self-Assembling Brain: How Neural Networks Grow Smarter. Hi everyone, I’m Paul. And today I talk with Robin about a handful of topics in the book. Robin is a neurobiologist, or a neurogeneticist more specifically, at the Free University of Berlin, studying, among other things, how DNA and the developmental process encode the wiring of brains in the fruit fly Drosophila. The central theme of his book is that current artificial intelligence, and perhaps current neuroscience theories, are leaving out an essential part of what makes us intelligent, and that’s the growth and development of our brains’ neural networks. In the case of deep learning and artificial neural networks, Robin suggests we don’t yet appreciate how information, which begins at a relatively low level encoded in our DNA, unfolds and increases with time and energy.  

Paul    00:02:10    As our networks are formed, something Robin calls algorithmic growth. And his claim is that it’s essential to include a growth and an evolutionary selection process to get anywhere close to building something with human-like intelligence, if that’s even what we want. So we talk about that central theme and many of the issues that arise out of it. And as you’ll hear, the book is also a history lesson about the parallel yet independent paths of AI and developmental neurobiology. Show notes are at braininspired.co/podcast/124. Thanks to all my Patreon supporters. If you like this sort of thing and you want access to all the full episodes and to join our Brain Inspired Discord group, you can do that on the website, also at braininspired.co. Alright, enjoy. Robin, I enjoyed the book immensely. So we’re going to talk all about it. One of the things that I was surprised about when I read it, given that I had seen one of your lectures, was the amount of history in it.  

Paul    00:03:13    So of course you dive into the history of AI because the book is all about how AI might be missing something, right. But what I was surprised about and thoroughly enjoyed as well was the history of, uh, developmental neurobiology and how you weaved that history into, uh, the history with, uh, of AI and how they happened in parallel. And of course, you know, hadn’t really spoken to each other, uh, for many years. So, uh, and you did it in a format that was sort of a storytelling format, so it was very easy to digest. So, uh, congratulations, and thanks for writing the book.  

Robin    00:03:48    Thank you so much for having so many good things to say about it. Yeah, it is an unusual angle, and it’s because I’m coming from kind of left field, if you will. Right? I’m a neurobiologist. You know, I dabbled a bit in informatics and I dabbled a bit in philosophy as a student, but I’m running a research lab, teaching undergraduate and graduate students, and we’re publishing papers on how genes encode the information to wire a brain. And so the origin of this somewhat unusual approach, and of the parallel storytelling of the fields of neurobiology and artificial intelligence, really kind of originates with me being, I guess, somewhat unhappy with my own field in biology. Because, you know, we started many years ago, in the very successful molecular revolution, to publish more and more papers on individual genes, their products, and their roles in how the brain develops and how it functions.  

Robin    00:04:55    And it’s led to a data explosion. And, you know, my very good friend and colleague Bassam Hassan says, you know, we’re increasing the amount of knowledge at an unbelievable pace, but our increase in understanding is not keeping up. And so I was wondering whether we’re missing something. I was wondering whether we need some kind of theory, whether we need some kind of information-theoretical background. What does it even mean that the genes encode the brain? I mean, that’s obviously a very loaded, strange question. And then of course you can’t avoid, in this day and age, being bombarded by the news about some kind of allegedly superintelligent systems, you know, seemingly overtaking us next week. So, what do they know? That’s when I started to feel like, okay, maybe these people know something that I don’t. And I realized, and that’s where the historical part comes in, that the history of AI is unbelievably fascinating.  

Robin    00:05:58    My goodness, these people have been at each other, from very different fields, for so many decades. And for most of those decades, they really wanted to have nothing to do with the brain or neurobiology at all. And all the successful approaches that we hear about today, like the unbelievable success of AI that we have right now, are actually all based on what are called artificial neural networks. And they’re closer to the brain than any approach in AI has ever been in history. So that’s gotta be interesting, right? I mean, if they suddenly take an idea that we have been studying as biologists for so many years, and they can do with it things that we can’t do, and then we feel we can also do some things that they can’t do, maybe we should talk to each other. So this is how it started.  

Paul    00:06:46    Ah, so I was imagining that as a neurobiologist, you saw the deep learning revolution, the neural networks, and essentially you said, whoa, they have a lot to learn from me. But in fact it was kind of the other way around when you first poked your nose into it, thinking, what do they know that I don’t?  

Robin    00:07:04    Absolutely. I mean, the idea that, you know, oh, they should be listening to us, is dangerously close to ignorance of any kind of approach, right? Because you need to first understand what the other field is doing, and it’s a very rich field. And so this is exactly why I had to go into the history. I mean, when I started, I just wanted to say, okay, what are they doing right now? What is interesting about this? And I felt like, okay, deep learning is really cool. Recent approaches in reinforcement learning make perfect sense to me. It even has an evolutionary angle. This is even more biological than anything, right? I mean, babies clearly learn with some kind of reinforcement learning. So that makes some sense. So that was all very good. But then you start to wonder, why are they not doing certain things at all, things that, you know, I’m actually spending my life studying?  

Robin    00:07:57    And the thing that came to the forefront at that moment in my thinking, of course, was: wait, I’m a neurogeneticist. I am studying how genes encode a growth process to make a neural network. And that thing is actually pretty smart before it starts learning anything. I mean, we can talk about examples later on, from the insect world to what we know when a baby is born. It’s not a random network that starts to learn, which is exactly what pretty much all deep learning approaches do today. There’s no genome, there’s no growth process; they’re turned on to learn. And that made me then wonder, okay, why is it that they’re doing one thing that we know from biology, but not the other? Why is it that they initially didn’t even want to use neural networks to start with? And when I started to understand their history, I had, like, so many moments of, yeah.  

Robin    00:08:54    You know, these kinds of problems I’ve seen before; we had that in biology. We had a time when people felt like the human brain is so amazing, and our thinking and learning, you know, it couldn’t be the genome. In the beginning it wasn’t, of course, even known what the genome looks like. It had to be learned, right? And so this debate on learning, on nature versus nurture, on how much can actually get into a network via a growth process, and to what extent you would even need it, has been there in the neurosciences as well, some time ago, but in one form or other it’s still there. And so this is very interesting, right? To see the parallels between those fields, to see how, from my perspective, to some extent the AI community is actually retracing some steps that historically neuroscientists have taken, made me wonder whether, you know, there is something again that we need to talk about. And I wondered how much crosstalk there was, and it turns out there was very little. And so this is how the book originated.  

Paul    00:10:03    So I want to talk just for a moment more about the book itself before we get into the ideas that are in the book. Because I’m curious, for one thing: that was a very rich and detailed history that you told of the developmental process. Was that a lot of work, or was that easy to piece together? Because I knew the AI history quite well, because it’s been told a few times, and I even learned more about it through your writing. But what I didn’t know about was that rich history in developmental neurobiology. Did you just know all that, or was that a lot of research that you had to do to write that?  

Robin    00:10:42    It’s both. I mean, it came first, of course, right? I mean, it is my field. But you’re actually touching quite an exposed nerve. I mean, if you open textbooks, there’s a lot of knowledge that is being communicated without communicating the field’s history. And there are some debates that just appear ludicrous today, right? I mean, the whole debate, and this is how many of my lectures and the book start, is that people couldn’t even agree whether the neuron is a physiological unit, because nobody could understand how you could have so many of those things wiring together. Right? And this seems, of course, a completely settled debate. Of course we know that the brain is made up of neurons as its physiological units, and somehow they need to connect up to make something that works and appears smart to us.  

Robin    00:11:28    But this you can only understand if you actually have the historical context. And of course I knew some historical context, but probably not more than any other somewhat trained developmental biologist. And when I started worrying about my own field, this actually happened before any deep diving into AI history. You know, where are we at right now? Why are we just collecting gene after gene and molecular function after molecular function, and publishing more papers every year than ever in the history of developmental neurobiology before? And are we actually getting closer to understanding how that thing is put together? So when I started worrying about this and wondering about this, that’s of course when I initially went back in the history of where this came from. And there’s a very cool story to be told, that’s of course in the book, of an interesting break point in history, where a very famous neuroscientist, Roger Sperry, went ahead and said, you know what?  

Robin    00:12:31    The embryonic development will make a brain out of individual neurons, live with it, and there have gotta be molecular interactions that ultimately define the development and the growth of that network. And that was at a time, in the thirties, forties, and fifties predominantly, when most scientists, including the famous, even more famous, supervisors of this gentleman, were on the other side of the spectrum and said, you know, it’s gotta be all somehow induced by some kind of plastic form of learning. And, you know, brain wiring was at that time the subject of psychology, and not the subject of, like today, molecular geneticists; they didn’t even really exist at that time. And so this is where it all started, and it’s been good and bad. Of course we know that, you know, there are a lot, some say an infinite number, of instances of individual molecular functions in different contexts and specific cells and specific animals at specific developmental stages.  

Robin    00:13:44    I mean, you know, there’s no limit to the depth of this that you can study and publish papers on. But how the genome actually does that? You know, famously the genome contains one gigabyte of data, and there are people in the AI field, like Schmidhuber and others, who actually use that as an argument, saying, look, you know, one gigabyte of data can’t encode all of that. So, you know, we don’t even need to look there; clearly we only need learning, we don’t need the genome. But then again, of course, what we’re all doing, maybe not looking at the whole forest at a time, but always looking at some individual leaf or needle of, say, one tree inside that forest, is not looking at how the thing unfolds. And we do know that an apple seed will grow an apple tree. This is the job of developmental biology, and the development of the brain is no different. And so this is basically where we’re coming from, and this is why the history became so important and has a huge impact on what we do today.  

Paul    00:14:51    So, so I will read from a quote from your book, I guess, to start us off here, because, uh, just dovetailing off of what you were just mentioning this then is the core question of our workshop. What difference does it make for the intelligence of a neural network, whether connectivity is grown in an algorithmic process that is programmed by evolution versus only learned based on a designed network with initially random connectivity. So you were just talking about people like Schmidt, Huber, um, and not to single him out because it’s the entire AI community, essentially that it’s, um, after reading your book, it’s, it’s, it’s just a curiosity that, uh, we all I’ll include myself, right? Because, you know, at first pass it’s like you have this brain, you’re the network. And then all you need to do is learn from, from there. And that’s what intelligence is about. Uh, but your book makes the argument or asks the question, uh, that may be the growth process itself from the genetic code, uh, is an important part of that process. And of course there are debates in, um, the deep learning field, uh, how, how important, you know, inductive biases are and the architectures that you use and whether that matters. Uh, but, but I w I would guess that you would say that that’s, that’s not enough.  

Robin    00:16:07    Yeah, exactly. That’s exactly what I would say. But I would also say it’s a step in the right direction, you know. Ideas like convolutional neural networks, which are basically mimicking a little bit of the wiring of the visual cortex in mammals, or, you know, the new proposal that just made somewhat of a splash by Jeff Hawkins in his A Thousand Brains, which is of course that, you know, we’ve got to design these things like the cortex, and then you have all these cortical columns, and then they can copy it and vote and all these things, right? And these are all very good ideas. But they’re basically coming, like, tiny, tiny steps back from the purely random network. And of course they’re still designed. Whereas in biology, how you got to the cortical columns in the first place was of course through a growth process.  

Robin    00:16:57    And the fact that there are cortical columns contains information. It’s not just, you know, randomly connected network that we have in our brain. Otherwise most of my neuroscience colleagues would be out of a job very quickly. They’re studying circuitry, we’re studying how the neurons are exactly put together. And it’s fabulous. I mean, you can study things like, like, uh, motion detection. Um, so there’s a beautiful example by understanding how exactly different types of neurons or certain delays and certain, you know, conduction velocities and, and synaptic strengths, and then some state dependencies where you have some neuromodulators that whole populations of neurons suddenly have a lower threshold, all kinds of stuff. And when you see how all of this is put together, suddenly you understand, you can even build a computational model based on that. And it tells you, yeah, you know, this network can see motion, but this is very different from teaching a completely random connected network motion detection, right?  

Robin    00:17:59    So the, to teach the randomly connected network will not lead you to all the both cellular connectivity. That is very specific that I just try to just, you know, like outlines describe, or the molecular aspects, which are, um, things like, I, I even mentioned something like a neuromodulator, um, these are molecules that are diffusing in whole areas of the brain and changing synaptic weights. And this is simply not included in the modeling off synaptic weight changes in an artificial neural network. So evolution has found solutions to problems that the brain can solve, like motion detection, like you name anything up to cognitive abilities in the human brain or down in insects. You know, my favorite example that I’ve seen is the Monarch butterfly that flies these 3000 lives. You’re crazy about butterflies. I really love those. They do so much. They’re like half a gram, and they can do so many things.  

Robin    00:19:06    If you wanted to train a neural network that is randomly connected to all the things that butterfly can do to achieve butterfly intelligence. I can tell you, we are far away from that, but, you know, without straight, this is the, this is the, this is the idea, right? So clearly learning from biology has become more accepted than ever in the history of AI before. Um, Jeff Hawkins famously was unhappy early on as a, as a young person, before becoming a billionaire that he wants to learn from the brain. And, you know, there’s a whole history, and this is why I tell that history, right in the book in AI research saying, you know, we don’t need all this messy wet stuff. And, you know, all these idiosyncratic solutions that evolution may have found a way we can design something from scratch. That’s better. And now we’re at a time where we kind of, you know, dipping in.  

Robin    00:20:00    I mean, when I say we actually talk about the AI community, deep dipping into ideas from the brain, like, you know, maybe we need critical columns, but it’s really just the tip of an iceberg because the brain is not just simply quite a can columns. It has all the molecular beauty that defines how individual neurons communicate with each other. And there is so much information, not just in the chronic activity of a specifically wired network, but also in the molecular composition that you cannot, if you want to simulate the way the human brain works or butterfly, BrainWorks just reduced to synaptic way to change. And so this is where I’m coming from. Well, I, I don’t know how much I agree with you that it’s more acceptable  

Paul    00:20:47    Now to include biological detail because there’s, and you document this in the book as well. There is, um, a constant drive in the AI community to abstract out as much as possible, right? So the idea that they, that you would need to include growth, and we’ll talk more about the details there in a moment, the idea that you would need to include growth from the gene must be horrendously, uh, unattractive to someone in the AI community. Who’s just trying to get their neural network to learn something, right?  

Robin    00:21:17    Yes. It’s very unattractive. Actually, we came, we did the experiment together with colleagues, just how unattractive it is. Um, if you just want to learn a specific task. And this brings us to a very interesting topic, right? I mean, artificial neural networks, um, based on reinforcement learning have been at this point more successful in almost any individual task a human could do that. I could imagine. I mean, obviously they can play better chess. They can do better visual image recognition. They can do, you know, better solutions to the cocktail party, problem, auditory, prison, whatever, right? So individual things, you can train these things to become better. If you now want to train the same networks, something else, um, you quickly have the problem of catastrophic forgetting. Um, and the AI community is trying to address that with, with deep learning approaches. And there are some good proposals.  

Robin    00:22:12    Now, the, the bottleneck in teaching in network, any of these things is still the training, obviously because the design is largely random, right? I mean, you may have some pre connectivity like in a convolutional or recurrent neural network, but really the key bottleneck is learning. So reinforcement learning takes a lot of time and energy to get that thing to be really good. Once it has learned it, you can just, you know, um, uh, deploy it and then it can decide whether it still should learn anything or not. And, uh, you know, but once it has learned something, it just can do, right. It can recognize images. It can make predictions about, you know, who should be your spouse or your next soap. Now, if you want to do this same job for a specific task with a neural network that is not trained by learning, but that is trained by a genome that produces the network.  

Robin    00:23:12    And then once you grow that network, you have it evolutionary selection for one that works back to the genome and iterations of this process, then you will soon find out that the time and energy that takes is even enormously bigger. So let me, maybe it’s really worth saying one more word about this, the AI that we know today, everything in any of these big Silicon valley companies that we’re all familiar with, all the stuff that we’re scared of or impressed with, it’s all, um, trained networks that have been trained in one or the other form, big data or reinforcement learning, deep learning stuff. Right. Um, but there is an AI community that actually does a very different type of learning off artificial neural networks. And it works like this. You take a genome, the genome defines, you know, in the easiest case directly synaptic weights.  

Robin    00:24:10    It’s not how biology works, but that’s the easiest approach you can take. You can say, I have one gene person NEPs, if you will, right? And then you, you can basically fill the synaptic weights of a recover matrix of, uh, of a recurrent neural network. Um, and then you can have that thing, do something, for example, let a little agent find a path through amaze or do image recognition, anything you want. And then if it does it, did it, well, then you take more of the genome that was at the base of this. And if it didn’t do it well, you just mutate more of the genome at the base of it, and then you do it again and then you do it again. And then you do it again, just like backpropagation is an iterative process where you train and train and train and classic, deep learning and reinforcement learning is actually very similar to the process.  

Robin    00:24:56    I just told you, right? Because you only learn from the end state of the system. So can you actually train the network, not by learning, but by keeping on randomly mutating, that genome that feeds the synaptic weight matrix, but that’s still without a developmental process. And if you now add a developmental process, this thing gets very quickly, even for the biggest computers on earth, out of hand, you’d have a genome that say, you know, few hundred genes is not even being done a few dozens. And then you basically feed that to, into some gene regulatory network that may have to go through a few hundred iterations of a developmental process that leads to the numbers that you fill in the way to matrix for the recurrent neural net. Right? Then you finally have a network that can do a task, perform something, see whether it’s good, and if it’s good, it’s good.  

Robin    00:25:48    You keep more of the genome. And if it was bad, you mutate more of it. Imagine this iterative process. So it’s a huge computational effort is orders of magnitude more computational power needed to even simulate a laughingly simple version of a genome and developmental process in an iterative evolutionary selection process. And the outcome is at this point in time, never better than the deep learning stuff. So therefore nobody’s doing it right. That’s not quite true, right? I mean, there are some academic scientists doing it, but then the question becomes, you know, where are we in this artificial neural networks based on deep learning. I mean, before there was deep learning, right before we had like this humongous amounts of data, um, and, and faster computers also were not successful. That’s when symbol processing logic had its heyday in AI. So maybe today we’re at a time when computers are still not fast enough, maybe we need quantum computers or something to actually simulate the evolution of the growth process and neural networks.  

Robin    00:26:58    So not deep learning, but, but, but, um, neuro evil devil learning, if you will, of artificial neural networks, and then they suddenly will become powerful. That’s one thought and just the last thought, and then you need to stop me because otherwise I keep on talking. It’s also a big question of what we’re trying to achieve, arguably, to achieve a single task. I don’t see why this enormous effort that I’m talking about here right now would be better if you just need face recognition is, you know, the deep learning is amazing. And I don’t see how this, this, this like orders of magnitude more computational effort evolution of the neural network would do that one task better. But this of course brings us to the question of, you know, artificial general intelligence and where we’re really going with this. Maybe you need that more, much more effort process. If you want to go beyond single task artificial intelligence.  

Paul    00:27:55    That was a lot that you just talked about, maybe where we could start, uh, is the concept of, uh, unfolding information and algorithmic growth, right? So you had mentioned that there’s not enough what you alluded to, that there’s not enough information contained in the DNA, in our DNA to encode all the connections, uh, synaptic connections in our brain, uh, it’s orders of magnitude, less than you would need to encode the entire structure of our brain, but that through the process of, uh, genes becoming proteins and transcription factors, uh, and, and through the developmental process, through which you call algorithmic growth, uh, that information unfolds and that essentially encodes the, uh, program that results in our connected brain. So could you talk about the concept of unfolding information and algorithmic growth?  

Robin    00:28:50    So the, the, the, the words are the best. I could find unfolding information and algorithmic growth, but they’re just putting a label, um, uh, on, on something that we clearly observed. Uh, we know that if we look at an apple seed, we can get all the information out that’s in there, it’s in the sequence of the DNA. You know, maybe some lipids around it are important. We need to know some physical laws and, you know, based on experience that you’ve seen it before, that if you put that seed in the ground, that would be an apple tree one day, you know what it will be, look what it will look like. So in the apple seed, there is no way to read that it will be an apple tree. Um, the only reason why we know that this is what happens because we’ve seen it happen before. If you would know, then they see the episode will not reveal the secret. You can only compare it to other sequences you have and what they did in the past, but that’s the same sleigh of hand, right? That’s the same cheating based on previous outcomes, even if  

Paul    00:29:57    All the DNA and, and know everything about the contents of the Appleseed. Yeah.  

Robin    00:30:01    Alright. So this is what I’m saying, and this is controversial, correct. Clearly there are more optimistic, um, molecular geneticist than me that feel like one day, if we just know enough, you know, we just will understand just how this happens. And so, um, this is what brought me to try to find out whether there is any more solid way, any mathematical way. Um, is there something in science that tells us whether or not something like this can be unpredictable, that you have a simple code, right? Let’s just call it one gigabyte of genetic information. Simple. I mean, it’s not amazing, but just for, you know, in relation to describe, let’s put it like this, describing the DNA sequence in our genome, or in an episode compared to describing every single neural connection in your brain, or even the branching pattern on an apple tree is like, you know, it’s like comparing almost nothing to an enormous amount of information, right?  

Robin    00:31:02    So clearly something happens. See if clearly during growth, you know, there’s, there’s more in your brain, then you can read in a sperm and an egg. And so I try to find examples. So I talked to a mathematicians and they looked in other fields again. And so I came across something that I’ve known for a long time, but it’s not been obvious to me where the connection lies. And I guess it’s a connection that, that still requires some explaining. There are examples of very simple rules, very simple codes that can lead to a lot of what we like to call complexity. And the example that I used in, in the book, uh, our cellular automata, right, Steven evolved from made them quite famous with this book and you kind of signs. And he showed that there are types of one dimensional cellular automata. I’m not going to go into details.  

Robin    00:31:51    There’s super simple roots. It can do it on math paper. It’s like, you know, the deterministic, they always produce the same thing. They’re boring in many ways, but he could actually show that with a super simple rule set that I can write down in one line. You know, if this, then this, if this, then this, if this, then this stuff, um, and it’s just black and white squares, if you keep on applying the same rule again and again, and again, you’ll grow a pattern that never repeats. And that will literally grow with infinite time to infinite complexity. And for one of those rules, there is actually a mathematical proof by a coworker of Singapore from, from the nineties already, that shows that it is undecidable, which is, which is math speak for unpredictable. So this is actually what is funny, funny to you, but I think the answer is this is what is called the universal Turing machine. If it can contain in its pattern, every single computation that you could possibly do in math. And this proof shows that it shows that a very, very simple rule set can produce if a complexity, it’s the smallest known the simplest known universal Turing machine in science today,  

Paul    00:33:06    This is Rule 110, right?

Robin    00:33:08    This is Rule 110, the cellular automaton.

Paul    00:33:11    Which by the way, I think would be a pretty good band name,  

Robin    00:33:16    I guess so, yeah. 110, this is a number, you know, that's the emergency number in Germany also.

Robin    00:33:26    Yeah. So, sorry. And it produces infinite complexity, and we know that you can't predict what comes out of it. So what that means, you know, I think I'm making it too complicated; it's very simple. You can literally do this on paper, and everybody in grade one could already do this: draw line after line after line, and you will find that the pattern never repeats. And it's beautiful. And you may ask, well, could I have predicted it? Is there any kind of math that would allow me, from just knowing the code, to know what the beautiful pattern in the end would look like? And now you see the analogy, right? This is what biologists, and we all, would like to know: is there any math that would allow us, from an apple seed or a human sperm and egg, to predict, without any previous knowledge of outcomes, what comes out of it, what brain wiring it will have?

Robin    00:34:20    And so for this super simple mathematical concept, for this Rule 110 cellular automaton, we know there cannot be any analytical math to calculate what, say, row number 1000 looks like. There's only one way to do it: you need to let it grow. You need to let an amount of information unfold. If you wanted to describe that end point bit by bit, it would be quite a lot. But if you would only want to describe the information needed to grow it, it would be very little. But you can't predict from the little information what the end point would look like without actually growing it. So this is why I talk about unfolding information, and this is why I call it algorithmic growth, which is, you know, just a simple description of what we're seeing here, right? It's algorithmic because you use the rule set again and again, and you grow something, and there's no shortcut to that.
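The growth Robin describes is easy to try for yourself. The sketch below is not from the book; it is a minimal Python illustration of Rule 110, assuming cells outside the row are fixed at 0 (a simplification: the classic construction uses an unbounded or periodic row). Each row is produced only by applying the rule to the previous row, exactly the "no shortcut" point.

```python
def rule110_step(row):
    """Apply Rule 110 once. Each new cell depends only on its (left, center,
    right) neighborhood; the 8 outcomes are the bits of the number 110."""
    n = len(row)
    return [
        (110 >> ((row[i - 1] if i > 0 else 0) << 2
                 | row[i] << 1
                 | (row[i + 1] if i < n - 1 else 0))) & 1
        for i in range(n)
    ]

def grow(width, steps):
    """Start from a single live cell at the right edge and iterate the rule,
    keeping every row, just like drawing line after line on graph paper."""
    row = [0] * (width - 1) + [1]
    history = [row]
    for _ in range(steps):
        row = rule110_step(row)
        history.append(row)
    return history

history = grow(32, 20)
# Every row so far is different from all earlier rows: the pattern
# grows leftward and never settles into repetition.
```

The rule set really is one line (the bits of 110), yet the only way to learn what row 1000 looks like is to compute rows 1 through 999 first.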

Paul    00:35:19    But so in the example of the cellular automaton, this is a very simple system, right? And the idea in your book is that DNA doesn't encode the end point, but it encodes the algorithm to grow things. And DNA and the developmental process are way more complex than a cellular automaton. We could go a lot of different directions here, but one of the daunting things is: let's say your DNA encodes a protein, and what we would search for is the function of that protein, right? But through the time and energy in the algorithmic growth process, the quote-unquote function of that protein varies depending on different contexts and different stages of development. And of course, then you have all of these things interacting. So somehow the algorithm is encoded in the DNA, and development takes care of the rest.

Robin    00:36:19    Yeah. Isn't it beautiful? It's a problem, it's just so supremely non-intuitive, right? But we know it happens. I mean, there's no magic. We know that you have a seed, you have an egg and sperm. And we know that, given enough time and nurture, something beautiful will develop, and developmental biologists have just been studying how that happens, right? Instances of this, snapshots of this.

Paul    00:36:50    But in principle, one cannot look at the DNA code and infer the algorithm, right? So, yeah,

Robin    00:36:58    I actually don't know that, right? So, the reason why I like this simple Rule 110, and you're right, it's of course much simpler, ridiculously much simpler, to the extent that it really is not a good model for brain development at all. It's just an example that shows that a tiny amount of information, even deterministically, with the same rule applied again and again, I mean, the simplest possible thing, if you put in enough time and energy, can produce something of literally infinite complexity that contains everything, any possible computation. So my argument is kind of: if that simple thing already can produce infinite complexity, then we should definitely at least not be surprised that something so much more complicated, like a genome, with a so much more complicated and prolonged and protracted developmental process, like, you know, what happens for nine months in the womb,

Robin    00:37:50    and then for many years thereafter, can lead to quite remarkable, what we like to call, complexity in brain wiring. So what's so supremely non-intuitive about this is: where does the information come from? If the stuff in the genome is so little, and the information I need to describe the network connectivity is so much, where does it come from? And this is really the core of understanding the algorithmic growth process and the time and energy it takes. So there's a lot of beautiful discussions we could have now. And the physicists who are listening to this will know, of course, a lot about this, right? I mean, the fact that you can describe entropy in many different ways, and, you know, you can describe the information content of heat exchange between my room and the outside and so forth.

Robin    00:38:51    The time and energy you put in puts in information. So this is not easy to explain, but this is of course what we know happens, right? So it's not like I'm saying something outrageous. We know that there's a seed, there's an apple tree, and all you needed in the meantime is time, energy, water and sunshine. So just information-theoretically, what that means for brain wiring, and how much information there is in an actual wetware biological neural network, I think should not be underappreciated. The amount of information that has grown into that thing while you were nine months in the womb, and by your growing up as a toddler and later teenager, with all the characteristics that you have at these developmental stages that we immediately recognize (nobody mistakes a teenager for a toddler behaviorally), should not be underappreciated, and neither should be learning.

Robin    00:39:59    But, you know, this is basically trying to find the sources of information. And then coming back to your original question or statement about pragmatic approaches in AI: those of course must try to shortcut this, right? If you want to just make a deep learning neural network that recognizes, you know, whatever helps your business grow, you want it to work. And so you're not going to go through all these processes, and you don't need to. But then the question is, can that thing ever be what our brains are? And that's where I think there is no shortcut.

Paul    00:40:44    So you mentioned learning there, and connectivity. So I'll ask about learning first. Modern AI begins with learning, essentially. I mean, like you said, there's some dealings with architecture and how many units to use, et cetera. But then the almost sole focus in deep learning is the learning process. But do I have this right, that you see learning as a continuum of the algorithmic growth process? Or do you see it as, you know, separable? What I'm guessing is that you see our continued learning throughout life, and as we develop, as not separable from the algorithmic growth process. It's all one big

Robin    00:41:33    Process. Was that a question? Sorry. Yeah, it's a wonderful question. And it's a big distinction between what we know about biological brains and artificial neural networks. Artificial neural networks, however close you want to make the design to the brain, they still have an on switch. There is a design period, then there is this break, and then learning starts. So there's a design period, and then there's a learning period. Biological brains do not really have that. There is, of course, a period when you do not yet have learning. That's when the neurons are in a state where they are not yet excitable cells; they are not yet connected early in embryonic development. You have all kinds of other developmental processes that have to start making that network. So you could say there's a break in the sense that there can't be learning yet while the connectivity develops.

Robin    00:42:23    However, the moment the neurons start making connections, things start happening. Part of the developmental process of every neuron is that it becomes an excitable cell. They will start to spontaneously excite each other. We now know that a large part of very early brain development is activity waves that sweep through parts of the brain, both in an insect as well as in the human brain. And these activity waves are the brain learning from itself, already prior to any input. So part of the purely genetic program, no learning added, no environmental information added, is already that the neural network starts talking to itself. And that's part of what changes its connectivity, even before you're born and even before there's any input. The moment there is input, you're still inside the genetic program, in the sense that the way evolution selected for the genome that encodes that developmental program is that there is a time when, for development to conclude properly,

Robin    00:43:36    Certain input is absolutely needed. I mean, there are horrific experiments, which I don't even want to tell you about, that deprive a human of a certain input, and afterwards certain things will never develop. And there are critical periods, as they're called in biology: if the input doesn't happen at the right time, as part of the algorithmic, genetically encoded growth process, if you're not being talked to, if you don't get visual input, if you don't get auditory or olfactory input, certain things can never be recovered. They become part of the growth process. So the genetically encoded growth process continuously partakes in and accompanies the learning process from the moment the network is a connected entity. And this includes activity before there's any environmental input, before learning from any environmental outside information, before any nurture, if you will. But it also includes that nurture has to be right then and right there, as environmental input, as part of that period of the growth process. And these continue our entire life, in one way or the other.

Paul    00:45:01    It seems like such a delicate process, that certain things need to happen at certain times or else the algorithm doesn't function properly, right? And yet we also seem to be quite robust organisms. How do we reconcile those two things? Because at first pass, doesn't it seem like, if anything goes wrong at the wrong time, in the wrong place, in the wrong environment, it could all go haywire? And yet we are surviving, thriving organisms.

Robin    00:45:33    Yeah, it can, actually. I mean, you know, we of course only see the winners walking around here, right? All those experiments that evolution continuously makes that don't make it are simply not there. The question of robustness is a beautiful question that neurobiologists, both developmental as well as functional neuroscientists, are struggling with. And there are certain features that we have learned are key to the robustness of this whole program. And I think the most important feature is the idea of autonomous agents. The idea that an individual neuron actually knows nothing about the brain. An individual neuron has its own algorithmic growth processes: it grows an axon, like a cable that needs to connect somewhere else, and just when it needs a partner to make synaptic connections with, the partner happens to be exactly there, right?

Robin    00:46:34    This is how the beautiful ballet of development unfolds. But if the partner were not there, the neuron would still run its program. And, you know, in this particular example, by and large the neuron would just make a synaptic contact with somebody else. So it would not be quite right, but it would probably be much better than nothing. And early in embryonic development, you can do crazy experiments. We work with flies in the lab, right? So in flies, when you develop part of the visual system, you can go in early in development and just kill half the cells. And then when you look at the final outcome, everything is perfectly fine. It just turns out that all the remaining cells during the growth process kind of just did what they normally do, but because there were no others that they would normally have had to compete with, which would have led to some of them dying

Robin    00:47:30    And some of them surviving. Without the competition, they all survive, and they just fill the space. So this is very robust, and it's robust because it is not a blueprint-directed process with some factory and robots that assemble it. It is a self-assembly process of lots of individual autonomous entities, and each individual neuron, like every other cell in your body, has the capacity to encounter different environments, to encounter surprises when things go wrong during development, and when things happen during normal function that are of course unpredictable in an unpredictable environment, and to deal with it. And that's one of the key ideas that we know is important for robustness. And it's a very interesting concept, because self-organization is therefore absolutely key to the wetware neural network function of any brain, as it is for its development. But self-organization is kind of implicit in artificial neural network research, yet it's slightly avoided. You could argue that gradient descent, and parts of how backpropagation works and how neurons communicate, have features of self-organization, and I think they do. But it's not a major topic in the design and training of neural networks when you look into the literature of the field.
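The fly experiment Robin describes can be caricatured in a few lines. In the toy sketch below (entirely hypothetical, not a model of real development), each "neuron" runs the same local program with no knowledge of the global plan, so killing half of them still leaves the space completely tiled:

```python
def self_assemble(neurons, slots):
    """Each neuron runs the same local program: claim the first free slot.
    Neurons that find no free slot lose the competition and die.
    Purely illustrative; no neuron knows anything about the whole."""
    filled = {}   # slot -> neuron that claimed it
    losers = []   # neurons that found no slot
    for n in neurons:
        free = [s for s in slots if s not in filled]
        if free:
            filled[free[0]] = n
        else:
            losers.append(n)
    return filled, losers

# Normal development: 20 neurons compete for 10 slots, 10 die.
full, died = self_assemble(range(20), range(10))
# "Kill half the cells": the 10 survivors run the same program
# and still fill every slot, with no competition deaths.
half, died_after_ablation = self_assemble(range(10), range(10))
```

The robustness falls out of the local rule itself: nothing in the program references how many neurons exist, so the outcome degrades gracefully when the population changes.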

Paul    00:49:13    I want to back up here. You mentioned the axon growth process. A lot of the book is dedicated to describing both the history and the current science and the controversies of how axons reach out and make the quote-unquote correct connections. Although you were just mentioning that there isn't necessarily a correct connection from the start, because these are autonomous agents, essentially, and they self-organize and end up doing okay. Before we talk about that process, thinking about the code of the DNA: the book is all about the eventual connectivity of the brain, right, brain wiring. But a lot of what is in the DNA must also be dedicated to metabolism, right? Because we have to survive in a far-from-thermodynamic-equilibrium state. Do you see the metabolism and the metabolic products as coexisting with the connectivity products in that algorithm? Is the metabolic code separable from the connectivity code within the algorithm? And I'm asking you to speculate, unless you have a definitive

Robin    00:50:27    Answer. I dare say I have a definitive answer. I'd say it's not separate, and it's not separable. This is a big discussion also in the larger artificial life community, rather than the AI community, about embodiment, right? The idea of, you know, is a simulation enough, and how much do you need to simulate? Do I need to simulate the metabolic things happening at individual synapses, or is it enough to just have a value for a synaptic weight, this kind of thing? And if you talk about cells acting as autonomous agents, they start to have, of course, their own drives, and they need their own, at least minimally simulated, metabolism. But yeah, this is at the heart of a self-organizing versus a designed entity. But you're asking about genes.

Robin    00:51:22    And so if we look at genes: of course, people have been trying to find individual genes that just tell us something about brain wiring, right? Is there, like, a gene for this one synaptic connection? That's of course nonsense, because we have, depending on how you want to count, let's just say 20,000 genes in humans, and some ridiculous number of synapses in the brain. So how does this work? And so then people were trying to find, is it maybe surface proteins that sit on one cell and another, and then they recognize each other? And these proteins exist, but then you find that the same surface proteins are also functioning in the liver, or, you know, somewhere where blood vessels grow and need to branch and do things. And you try to find, okay, this metabolic stuff, is metabolism only for, like, kidney, heart and liver, or how about the brain?

Robin    00:52:17    And then you find, yeah, of course, most metabolic enzymes are actually expressed at particularly high levels in the brain. It's the organ that requires the most energy. And so basically the bottom line is that there are a few heart-specific genes, and there are a few brain-specific genes, but by and large, this whole question is not very good, the idea of hoping to find a gene that specifically tells me how the brain wires. There are very few genes in the human genome that will not be turned on and off, that will not be read out in one way or another, in one cell or the other, during brain development. And it's all part of the algorithmic growth process. Evolution didn't care about our intuition of, say, two molecules that recognize each other and could be a key and lock for synaptic connectivity, even though, you know, we would love that. We love to read papers like this as developmental neuroscientists.

Robin    00:53:19    Look, I found another cell surface protein that's exactly on that cell, and the lock to that is another protein on the other cell, and both are genes of the genome, and this is how this one synaptic connection is wired. But evolution, I dare say, really tried out any kind of mutation in regulatory sequences and coding sequences of any kind of gene. And as the growth process unfolds, you will find that a mutation in some ubiquitously (that's biology speak for expressed everywhere) expressed metabolic enzyme, a mutation that, let's say, just increases its function by 5%, will turn out to be completely irrelevant for your heart and your kidney, wherever it ends up. But during brain development, there may be just this one neuron where, if you increase this particular metabolic enzyme by 5% at the exact time when it is making a synaptic connection, it changes the speed, say, because of the increased metabolic rate. The time windows are short, say the time window in which it can make a certain connection, and it will lead to fewer connections of one type and more connections of another.

Robin    00:54:39    And that was a mild change to a metabolic enzyme that's everywhere in your body, and the only change it may cause is an outcome, completely unpredictable, with a slightly differently wired brain. But, you know, evolution tries these things out. So this is why you need all of it. You can't just take a synaptic weight as a number. The information encoded ultimately in all these synaptic connections, and the way to get there, required an evolutionary process, selection of something that worked, that was not predictable, like Rule 110. But if you had enough millions of years to try out evolution on earth, and evolution of brains, you can figure out adaptations and changes based on mutations in many different genes, and there need be nothing intuitive about it, nothing like what scientists would like to see.

Paul    00:55:44    So modern deep learning, right, begins with a network of connected artificial units. And, like we've been talking about, through a long process the synaptic weights get adjusted through learning, but it starts as a neural network. And a large part of what you describe in the book is the developmental process that leads to a network, which is a bunch of neurons connected. And you give multiple examples throughout the book of different connection patterns that can happen depending on the sequence of the development. But the focus is still on sort of the end connectivity of the brain, that you end up with a network of neurons with connections between them. Are neurons the be-all and end-all, though? Do we need to consider things beyond neurons, like astrocytes and glia? Is that part of the whole algorithmic growth process that may eventually be important for building better AI and understanding how this all works, or do we really just need to focus on the network of neurons?

Robin    00:56:52    So again, we need to come back to what we want, what it is that we're trying to achieve. To make an algorithm that predicts what to buy, I don't see why you would need astrocytes. But if you want to have any kind of resemblance to what you may want to call a human AI, and I'm hesitating, because I almost tried to use the words general artificial... you know, there are so many terms, and they're all undefined, right? I mean, artificial general intelligence: nobody knows what the hell that's supposed to be. And that has a lot to do with there being no definition for intelligence. And, you know, the idea that you want to have a human intelligence, that's very different to me from human-level. I don't even know what level is supposed to mean, right? I can measure a level of playing chess, but I cannot measure a level of being, you know, Paul: how Paul-ish is my intelligence? And I think many different people will be more or less similar to your individual type of intelligence. And that requires your entire brain.

Paul    00:58:01    One of the seemingly important features of brains, if we can just stick with neural networks, is the feedback loops that occur within them: the brain is a highly, highly, highly recurrent neural network. And AI is taking this on, deep learning is taking this on, and people are using recurrence. But my question is, thinking back to the algorithmic growth process and a self-organizing system, and actually in the book you talk a lot about levels, so going to the DNA level: it is interesting, and I'm going to ask you if there's a deeper principle involved here, that an enormous amount of DNA is devoted to feedback, in the form of regulatory proteins that feed back onto the DNA and regulate what's being encoded by the DNA, transcribed, I should say, harking back to my own molecular biology days, trying to remember the words. And then through the developmental process, these feedback mechanisms also seem to make up, you know, a majority of the processes. So is this a deeper principle, that within algorithmic growth, to have a robust system, feedback is the main thing?

Robin    00:59:18    I think so. It is very important. I mean, this has been recognized; you can go back to the old cybernetics days, right? This is Norbert Wiener and others, who first formulated and quantitatively worked on ideas of feedback and how they determine self-organizing systems. Very specifically to the example you give, it goes back to everybody's Biology 101, right? The genome doesn't change, by and large. Once the sperm has met the egg, you're kind of done. That's your genome, and every single cell in your body has it, and it's the same genome in every single one of your cells. And of course, what makes the cell in your eye different from one in your heart is that different parts of the genome have been read out during a developmental process.

Robin    01:00:21    And that process, as we already discussed a little bit earlier, never stops. We know, in the brain, when it comes to learning, that to form long-term memory requires the feedback going back all the way to the genome: new transcription of the mediator of what will become proteins, the famous RNA, and then making the proteins. And then those proteins get incorporated into whatever molecular function you need to change the physiology of the cell. Many of the proteins that get read out this way from the genome are proteins that themselves have this funky property of binding back to the genome, which then leads to yet another, different type of protein being expressed. And maybe it's not one; maybe it's different thousands again. And then one of those thousands is again one that binds back to the DNA and changes what is read. So the internal program that keeps on running changes the cell continuously in a feedback process, and that's part and parcel of any developmental or growth process. But the same is true of environmental input.

Robin    01:01:32    Once you have a neural network, it will feed back to that genome. And the genome is always just there. It's like a book that's always there, and you always just need to decide what to read in that book and how to access that book. It's just enormously complicated. You can't just open page 255. You literally need a very strange combination of, say, 30 different proteins that are super unlikely to ever exist at the same time in the cell. But if they do, then a particular gene combination, you know, these 231 genes or something, will be transcribed, and you will have a new state of the cell. And this is of course what happens all the time. There are a few cells in our bodies that actually are very silent, but most cells in our bodies never, ever stop that feedback process.

Robin    01:02:27    Right? If you have an injury in your skin, it goes all the way back to changing what will be transcribed, and so it does for cells in your heart and, of course, cells in your brain. So the idea of a feedback to the genetic information, and what will unfold as a next step, this is basically what all of biology and all of biomedical research is about. We're continuously studying when and how what kinds of genes get expressed. And this is such an enormous field. Even with this laughable one gigabyte of base pairs in the genome (that's not much information if you want to write it down, an easy one gigabyte), the information that can change what will be expressed next, in just one or two iterations of any of the 20,000 genes being expressed, is a combinatorial explosion that gets out of hand almost immediately, and kind of ensures that researchers will never run out of something new to study.
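The combinatorial point can be made concrete with a toy Boolean network. The sketch below is purely illustrative (the three "genes" and their rules are invented, not real biology): each gene is on or off, and the current gene products decide what gets read out next, standing in for transcription factors binding back to the genome. With n genes there are 2**n possible expression states, which for roughly 20,000 genes dwarfs the one gigabyte of the genome itself.

```python
import itertools

def next_state(state):
    """One feedback iteration: which genes are read out next depends
    only on which gene products are currently present. Hypothetical rules."""
    a, b, c = state
    return (b and not c,   # gene A needs B's product and is repressed by C
            a or c,        # gene B is activated by either A or C
            a and b)       # gene C needs both A and B present

# Even 3 genes give 2**3 = 8 possible expression states;
# n genes give 2**n, the combinatorial explosion Robin mentions.
states = list(itertools.product([False, True], repeat=3))

# Follow one trajectory of the feedback loop from a starting state.
trajectory = [(True, False, False)]
for _ in range(5):
    trajectory.append(next_state(trajectory[-1]))
# This particular network settles into a two-state cycle: the readout
# keeps changing, driven only by its own products feeding back.
```

The state space, not the rule set, is what explodes: the rules above fit in three lines, yet which states are ever visited can only be found by running the feedback forward.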

Robin    01:03:36    I mean, it makes sense that AI researchers would want to avoid all of that mess. Yes. Which is why you can't design it. So the only way you can deal with that mess is to give up control over the design. You have to program it literally by making random mutations and hoping for the best. And if they're not good, then what comes out of it will just not be as good. Evolution didn't know either; evolution would just select against it. And if what comes out of it is better, then you keep some of these new, randomly tried-out mutations, and you have programmed something better. Because remember Rule 110; you know, it's a proposal, but take it for what it's worth, as a hint that it may be just as unpredictable what comes out of a given genome without having prior outcome information.

Robin    01:04:44    There's no other way to program this. And if you do this, then it would be nice to at least keep it not hyper-multidimensional, right? To at least have only a few genes and a few interactions. And then it's still a lot of computational effort to simulate that, to basically do this experiment, and for every single slightest random change, to see what comes out of it, to go through this entire effort of nine months in the womb, and then all those crazy teenage years where you just don't know what you want, and then finally sitting here on a podcast. That's not what you would like to do if you have a pragmatic job to do in programming a neural network.

Paul    01:05:32    In the book, I guess you get a little philosophical about generality and specificity, talking about the growth and development process and how different proteins are used in different contexts, at different times, in different environments. And what we want to do as humans to understand what's happening is have a very general principle, right? But then it's really difficult, so I'll just read what you write in the book: "This is a bit of a conundrum. We can choose between fewer general mechanisms that explain more instances less, or many less general mechanisms that explain fewer instances better. Where we draw the line for generality is anybody's guess." And this is a recurring theme; you also talk about levels and what's the right level to explain a given system, and so on. Could you comment on that balance between generality and specificity, and that conundrum?

Robin    01:06:34    So in our field of developmental neurobiology, it's funny how papers are written, right? Of course, everybody is studying a super specific system. We study flies, and some other people study the mouse, a specific neuron at a specific developmental stage where it makes a certain choice, and you want to know molecularly how it does that. And so, of course, every single one of these scientific publications about a process like this has to say: we are looking at this super specific thing. But then the hope is of course always: yeah, we're looking at this super specific thing, but really it's very general, right? We think we found something in our specific instance of the problem that tells us something about how this generally works. And so everybody does that; everybody then writes: this gives rise to a potentially general principle of brain wiring.

Robin    01:07:34    And the classic ones are, of course, the attractive or repulsive molecular interactions. And they're clearly part of how neurons interact and how brains are wired. You know, these are words that scientists use. But of course generality is, as the sentence you just read, I hope, kind of suggests, not black and white. I mean, what does it mean that this is a super general principle? I can give you the super general principle that everything that happens is read from DNA into RNA and then into a protein, and then the proteins interact and, boom, you've got a brain. That's a really general principle, and everybody would agree: this is what happens. The feedback stuff that we've been talking about, that's a general principle. The general principle that you have continuous feedback of the proteins, which are the products of the genome, onto the genome itself, to change what will next be read out from the genome: a general principle.

Robin    01:08:32    But of course, if I phrase it as generally as that, it tells me very little about how this one neuron makes synaptic contact with other neurons. So there I need to become a bit more specific. And then the question is, how specific do I have to be about every single molecule that is there, at the right place at the right time, to put this thing together, to understand this instance of the problem? And then still I want to say, yeah, but this is now a general principle. So, you know, I’m not sure it’s very philosophical. It’s just an observation that we have when we’re trying to understand any system, right? You can always make a very general statement that’s almost certainly true, but then it’s really not very helpful anymore.

Robin    01:09:21    Or you can make a very, very specific statement that’s really helpful to understand that specific thing. But then of course, it’s not going to apply anymore to every other one. There are things that every development of a synaptic connection has in common, and then there are things that must be different in the end. The idea that you need all of it, that you do need every single one of those molecules that have to interact, that unfold specifically at some point differently at every single synapse in the brain, such that every single one is in some way different from any other, is irreducible, if you want to have that thing in the end, the brain. And so the selling point of, yeah, but we’re looking only at generalities, is helpful to a certain extent, but at some point we just have to appreciate that it’s really an arbitrary choice how deep we look at any given synapse and at any given neuron.

Paul    01:10:38    Toward the end of the book, you talk about whole brain emulation, and you also talk about the brain-AI interface that is oncoming, with current companies trying to do this, like Neuralink, et cetera. But you argue that to emulate a whole brain, you actually need to start from the molecular level. You need all of those details, right? How strongly do you believe this? And have you received a ton of backlash on this?

Robin    01:11:08    How strongly do I believe? I mean, the data just shows it. I try to avoid believing anything, really, and I’m very happy to be proven wrong. The biological systems that we study are just like that, right? If you take out any component at any point, you have surprising implications. And as we discussed earlier, this is exactly how evolution programs our brains. So if you take any component out, if you simplify it in any way, you can still get something that’s amazing, but it’s not going to be that thing anymore. So if you want human intelligence, the argument is, you need every single one of those molecules. You can’t take any of that away. That doesn’t mean that you can’t produce something more intelligent than a human in many other ways; it’s just not going to be a human intelligence.

Robin    01:11:59    In terms of backlash, the idea is of course not very well received by the more pragmatic camp of deep neural network developers. But they don’t really mind all that much, because they know that what they’re doing is amazing. They know that what they’re doing is successful in what they’re trying to do. And it becomes a little bit philosophical again when we talk about where the whole journey is going. So I’m not actually clashing with any of these amazing people who are developing neural networks; I think we’re all equally impressed. Where I am genuinely clashing is with the prediction of where, therefore, inevitably, current neural network technology has to go. The notion that clearly the next thing that happens is an even more intelligent thing.

Robin    01:12:58    And the more intelligent thing will have this enigmatic property of being able to produce a yet more intelligent entity, and therefore then we have superintelligence and runaway intelligence, and we have the famous omega point, and we have the singularity, and then they’re all going to take over. So yeah, I disagree. We are not just far from this; we’re just not even anywhere near the right path to anything like that. And the argument has a lot to do with what we understand about what it means to make an intelligent system. The key argument I would make against this whole singularity debate, if this is where our trend is going right now, and I’m not an expert at all, right, I’m coming from another side, so I’m just raising a critical voice here,

Robin    01:13:48    is that there’s no example anywhere in the known universe of an intelligent system producing a system that is more intelligent than it. Our entire discussion that we just had circled around the idea that evolution is the only thing that can program a thing like our brain, precisely because of the unpredictability of rule 110 and what its outcome is, and therefore you need the entire unfolding thing. So the notion that just because something is even more intelligent than us, it will automatically, inevitably have that enigmatic ability to produce a yet more intelligent thing, I don’t see why that follows at all. I mean, why are we not able to make something more intelligent than us, then? We’re not. There’s no example of that in history. That’s the first major criticism. And the second major criticism, if you read a book like Superintelligence,
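[Editor’s note: for readers unfamiliar with rule 110, it is an elementary cellular automaton that is provably Turing-complete and a standard example of the unpredictability Robin invokes: the only known general way to learn what pattern it produces at step N is to actually run all N steps. A minimal Python sketch, added here as an illustration and not part of the conversation:]

```python
# Rule 110: each cell's next state is determined by its left neighbor,
# itself, and its right neighbor. The rule number 110 (binary 01101110)
# encodes the output for each of the 8 possible 3-cell neighborhoods.
RULE = 110

def step(cells, rule=RULE):
    """Apply one update of the automaton (neighbors wrap around)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the neighborhood as a 3-bit number: left, self, right.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The corresponding bit of the rule number gives the next state.
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single "on" cell and let the pattern unfold: the
# structure at any later row cannot be shortcut, only grown.
width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The few lines above fully specify the system, yet the pattern they produce is not predictable from them without running the simulation, which is the sense in which a compact genome can encode a brain it cannot describe.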

Paul    01:14:46    I don’t know that I recommend reading that book. It was a rough one. Yeah.  

Robin    01:14:51    Interesting. I agree with you. But, you know, it tickles the senses and the excitement about the future, as it has always done, right? I mean, there are entire, absolutely unproductive branches of so-called science that have been dealing with this ever since Vernor Vinge and this whole superintelligence thing came up many decades ago. They haven’t produced anything. They’ve just been talking a lot and making a lot of money. But one of the things that, for example, Bostrom desperately needs, and just glances over in like three sentences, in order to copy your brain, is the brain scanner, right? You just scan your brain and you get the information out, and then you can take that information and make another one. Well, you and I have just been discussing this the entire time, and we don’t even know what information we need, down to what molecular level, in order to have a brain that actually functions. So what is it that you are supposedly scanning? In theory at least, and practically this is an absolute pipe dream, but in theory at least, you could make something like a molecular scanner that takes every single molecule and makes a copy of that, and then you have that other brain. But that’s of course not what they mean. They all mean that clearly it’s got to be enough to have something like, say, the states of synapses and some digital representation thereof. That’s a pipe dream.

Paul    01:16:25    So I’m taking it you’re not a fan of the idea that one could slowly, let’s go with neurons, right, replace our neurons with mechanical neurons, very slowly, and then we would have a functioning brain made up of half, or however many neurons make up our brain. That it wouldn’t be functioning, that we’re not anywhere near the level of sophistication it would take to create an artificial neuron that would be able to function holistically.

Robin    01:17:01    I don’t want to be quite as critical as that. I’m actually very optimistic and very impressed with current machine-brain interfaces. As you know, we can now put extracellular electrodes, little electrodes, thousands of them, into, say, somatosensory cortex, and that’ll allow somebody who has lost the ability to move an arm or a leg to move a prosthetic device. And this kind of technology is, I mean, it’s fantastic, and it’s going to improve, and it’s going to get better. And it makes perfect sense for companies; there’s the BrainGate initiative, there is Neuralink, which of course, because it’s Elon Musk, made more headlines, but it’s not really all that new even. No, you didn’t roll your eyes there; I was expecting an eye roll. No, no, no, I mean, I’m actually trying to be positive here, and I really love these approaches.

Robin    01:17:59    I think it’s awesome that they’re doing this. And of course, what it’s designed for, if we just leave the science fiction out, is at the moment tetraplegic patients, who really can’t move their arms or legs, and locked-in syndrome patients, and so forth. If you can have any bionic device that communicates with the brain that helps these patients, bring it on. And if you’re blind, and you can put a little camera in front of your eye and connect it to the visual cortex, you’re not going to see as you and I do, but you see something again. It’s fantastic. And there’s no reason not to assume that this is going to become exponentially better in the next years. I’m sure it will.

Robin    01:18:50    It’s going to be wonderful. And it’s going to come with its own challenges, and there are going to be some critiques, you know, are the Borg now finally coming, and there are going to be some concerns, and there are going to be some failures, and there are going to be some successes, and that’s how any technological advance goes, and it’s all good. It’s all part of it. So this is going to happen, and it’s going to be great. But this is not what we had been discussing, and this is not what Bostrom talks about when he has a brain scanner to make a copy of your brain. We’re talking about: do you even know what kind of information you would want to get out of a brain to make that brain, or to make any brain? You know, what makes Paul Paul? Can you just copy that?

Robin    01:19:35    And then another person can be Paul? This is a whole other level. The amount of information we’re talking about here is something that we really just don’t even understand. And if you want to bring it to a very technical basis, right, we’re talking about a few thousand electrodes, and what these things do is super impressive, but these are extracellular electrodes somewhere in a brain region where no one knows what the hell these neurons are doing individually, where individual neurons can die, and you just get field potentials, as it’s called, from these areas of the brain. This is not the same thing as neurons that are intricately wired via all the molecular glory of synaptic connectivity inside an actual brain. These are valiant approaches and efforts, and they are to be applauded, and there are going to be big successes, but they’re not a path to making a human brain.

Paul    01:20:40    So Robin, there are a ton of other topics that we could go into from the book. I’m aware of the time, though. One of the cases that you make in the book is a case for the utility of cognitive bias, right? So I’m not going to ask you about that. But what I am going to ask you, thinking about cognitive bias: during the process of writing the book and thinking about these things, and in the beginning we were talking about what led you down this path, has your view of intelligence changed through this process? Or is your cognitive bias stuck, where you hold the same view of what intelligence is and what it means, et cetera, from before you were writing the book?

Robin    01:21:28    You know, the embarrassingly safe answer is that I can’t possibly know, because I’m sure I’m stuck somewhere, right? I’m caught in my own brain and my own biases and my own growth history. So I come to the topic of biases through the growth history, which is serendipitous and idiosyncratically individual, and makes you you and me me. And of course, new information coming into a brain is not compared on an even footing with the information that is already there. We know this, and that has a lot to do with how you have used your brain in the past. Let’s just not go into all the examples that leap to mind right now about beliefs that people hold that aren’t data-based. This is how the brain works.

Robin    01:22:22    So this is a dangerous area to be in. I mean, this is not my field, but there are amazing experiments in psychology out there that actually show that people who try not to think about these things, and who don’t worry too much about themselves, are actually more successful. You know, to be ignorant of your own biases... you can look these things up, like the embarrassing questionnaires. So it’s a very difficult and dangerous area to go into, right? You can spiral down into self-doubt very easily if you question, on a data basis, every single part of your growth history, with a brain that happens to function based on that growth history. So, has my view of intelligence, or any other aspect of the book, changed? Yeah,

Paul    01:23:17    Well done. So the way that I asked the question, it sounds like I’m asking, what was your view before and what is your view now? That’s not actually what I’m asking, because, like you’ve already said, we don’t really know what intelligence is. What I’m more asking is, do you feel like your approach to understanding intelligence has changed throughout the process? So you could answer, I thought this before, and now I think this. But that might be an impossible question, of course.

Robin    01:23:45    I think I’ve become more humble. I mean, if I dig into my idiosyncratic past, I remember being a young twenty-something-year-old scientist who thought he knew everything.

Paul    01:24:00    But that’s what got you to where you are.  

Robin    01:24:02    I’ve never been more insecure in my life than I am now. You know, maybe that’s a good thing. Yeah, it got me to where I am, exactly. I’m humbled. The process of writing that book, and making the connections that I try to make, was a humbling experience, because you have these moments when you realize parallels in history. You see that all these things have happened before, and you find these references in people who have thought these thoughts before. There’s so much out there, so much beauty and so much knowledge in the literature. I could have gotten completely lost just in the cybernetics years, between the forties and sixties, and all the thoughts that people had back then, Ross Ashby and so many others. And it’s humbling, right?

Robin    01:25:05    Because then here you are, and you’re writing this book, based on the idea that there’s got to be something that we need to talk about, because we should learn from each other in the AI and neuroscience communities, based on the idea that, you know, we have a genome and you don’t. And then you dig into this, and it’s of course a bottomless pit. So many thoughts have been thought before. And with every minute that I’m spending on a project like this, I’m changing my brain in a certain direction and not another, and time is so scarce. I’m running out of time every day, as I guess we are right now in our conversation. If I want to investigate something, I need to write this chapter, and the temptation to just get lost and dive into this almost infinite depth of knowledge that is out there is humbling. And so then I find myself disciplining myself and saying, okay, I have one week for this theme. I have one week to interview people and to learn about this, and this is how far I go. And of course, if you do it well, you must be, after this one week, at a state where you realize, oh my God, there’s so much more than I initially even thought could be known. It must necessarily be humbling.

Paul    01:26:34    So one of the things that you did allow yourself to dive into in the book is Roger Sperry and his work, and also Mike Gaze. And you talk about their differing ideas in the book and go into some depth about them. Maybe the last thing that I’ll ask you: you talk about how there are differences in scientists, right? Some are more vocal and some are quieter. Mike Gaze was a quieter scientist who didn’t self-promote as much. And not necessarily that Roger Sperry self-promoted, but he was more opinionated and willing to vocalize those opinions more. Both great scientists, right?

Robin    01:27:13    I think it’s fair to say that, yeah.

Paul    01:27:14    Yeah. The question is, how self-promotional should one be as a scientist? So now, you know, I’m asking someone who’s written a book and needs to promote it as well, right? Or, hopefully we’ll sell some copies, right? But is it better to be wrong and popular or well-known, or is it better to be right but unknown? Some career advice for those at home.

Robin    01:27:36    Oh God, this is so difficult. I can tell you from the bottom of my heart that I did not write this book with the intent to make it a New York Times bestseller. I would like it to become one, and I’m actually delighted to see that it is being picked up more than I initially could have even hoped. So it’s been doing very well, and I’m very, very happy about that. But writing the book is one way of being vocal. The example of scientists you give right now is also about just how you present data. There’s no way in science to present data without interpretation. The way you present it, the claims you make, bringing us back to the discussion we had before about generality, but also the manner in which you do this, has an enormous influence on how it is perceived.

Robin    01:28:46    And I’m afraid that there are many examples in the history of science where a really good idea, and the person who actually got something right, have not been credited to this day. We always say that science self-corrects in the end, and I believe, to the extent that I’m capable of believing anything, that that is likely true in the limit of infinite time. But it’s scary how many ideas have not been corrected, and people actually got forgotten, although they said the right thing, for that reason. So this is very tricky. I can tell you that I basically did everything wrong. I did write the book, so maybe that’s in itself something. But I always felt that science should be something that is promoted by the value of the science itself, which happens through scientific publications and peer review, and then reviews about that work, and discussions within the scientific community based on the scientific publications themselves.

Robin    01:29:56    Until, I think, two days ago or so, I didn’t even have a Twitter account. Oh, you’re on Twitter now? Not really. We made an account for my lab. We just published a beautiful paper on the 21st of December, and so I decided the Hiesinger lab now has a Twitter account, so that we can at least tell our colleagues we published this paper. But that’s literally it; I’ve sent one tweet in my entire life. That’s a really bad idea today. If you want to write a book, and if you want it to sell well, you’d better have a good following. You’d better have a Facebook presence, you’d better have a LinkedIn account, you’d better have an Instagram following. And I literally have none of these things. That’s great. Not even LinkedIn? You’ve caved in now. Well, so, you see,

Robin    01:30:51    Yeah. You know, I don’t want to be morally superior, right? It’s just choices we make. And so, yeah, there are ways you can try to sell whatever you want to sell more, and clearly I have not tried. I mean, there is no advertisement, there is no tweet, there is no Facebook, there is no Instagram, there is nothing about this book. Other than: here is what I wanted to tell you. I did this bit of work. I put all my heart and a lot of work into it over three years, researching the history and the current state of how developmental neurobiology and artificial intelligence see the information problem, the question of how does information get into a neural network, how does a neural network grow smarter. This is what I wanted to do, and I wanted to contribute this, and I’m proud of it. And now, you know, it lives its own life. If somebody wants to read it, it develops its own life. You contacted me, and you’re asking me questions that reveal deep knowledge of many of the ideas in this book. And so that makes me super happy. And maybe that’s better than a short tweet, you know?

Paul    01:32:12    Well, Robin, you should be proud of the book. I didn’t mention the way it’s laid out: it’s a series of ten seminars, and before each seminar you have these different characters, a neuroscientist, a robotics engineer, various personalities, in dialogue with each other. So in Gödel, Escher, Bach, Hofstadter uses this kind of form, but it really didn’t work for me there, with the Tortoise and Achilles, I believe. Not for me.

Robin    01:32:46    You didn’t like it? Interesting. Honestly, I have to tell you, it’s been 20 years since I read it, so maybe I should look at it again. It’s a bit different. What he does is this really fancy, funky thing where they actually talk like in a canon of Bach, right? So it’s very symmetric. And it probably also serves a little relaxation purpose, right? Between heavily loaded chapters, it’s nice to relax a bit. In The Self-Assembling Brain, the dialogues are different; they have a different origin and a different idea. They were not in my original book proposal, and they were not in my original draft.

Robin    01:33:41    The reason why they’re in there is because this is what my notes look like. When I got dissatisfied with my own field, as we started this discussion, I felt like it can’t be that we’re just studying molecule by molecule, or building a parts list, and we don’t even know what the end point is. I wasn’t sure what we were missing, but I wanted to know what could be out there. So I just started to go to other conferences. I went to an artificial life conference, just, you know, let’s see what these people are saying. And it was eye-opening. It was beautiful. And I met amazing colleagues that I’m working together with right now, and I had discussions with them. And of course, when you already have the idea of the book in your mind and you go to a conference, you chat people up, people come to you, and you just start to have a discussion.

Robin    01:34:32    And of course I asked them certain questions, and some people are amazing, and you just get into it. And then I would always run back to my hotel room and just write down the conversation, so as not to lose the arguments: I asked him this, and then he said that; and I asked her that, and then she actually said, that’s not even a good question, you need to do this, this, or that. And so it was very clear that people are always talking based on their own growth history, in their own worlds. In the beginning, these discussions are very tricky, right? Because I’m asking one thing, but I’m getting a completely different answer, in vocabulary that I’m not necessarily understanding. And then I did this in my own field, where I could ask the same questions. So I basically had a lot of notes from discussions, and all these characters were there. There’s the robotics engineer, there’s the developmental geneticist, there’s the neuroscientist.

Paul    01:35:30    Was that more fun to write, easier to write, harder to write than the rest of the text?

Robin    01:35:34    So this is where it gets interesting. Initially I didn’t write it as dialogues; I tried to get the essence from my notes, and I tried to write like a normal book. And it was very difficult, because what you lose is the way they talk, coming from different places. If I say something as a developmental neurobiologist and try to describe something, and a deep learning expert tries to talk about it, we talk differently. And the way we talk is loaded with our own history and our own way of talking. And the way we even then try to communicate becomes quite interesting, and it takes time for us to find each other. And so this is what happened: I had these notes as discussions, I tried to distill them, but just to circumscribe where people are coming from is so much harder and more cumbersome, and it didn’t read well, so that in the end I just felt, you know, just let these people speak.

Robin    01:36:33    And so I just started to write the discussions between those people. And it was much easier, because then of course the developmental geneticist can say these outrageous things that I wouldn’t even dare to write in a book as an author, because I’d think this is too hardcore, it’s not that deterministic, and so forth. But as a character, the developmental geneticist can say all these crazy things, because that’s what people are saying. And then I can have a robotics engineer come in and say, you know what, we don’t need all your wetware, it’s all nonsense, we can do this all much better; I don’t need a bird with all this stuff, I can make an airplane that flies much faster. These things are best said if you have characters who are themselves and have their own growth history, if you will. And then of course, it became a beautiful challenge over ten dialogues to have them find each other. So in the first dialogue between them, they’re all talking at cross purposes, right? Nobody understands each other, and they don’t even appreciate each other. And of course, what I hope happened to them is that, in the end, they understood each other a bit better.

Paul    01:37:43    Robin, thanks for writing the book. Thanks for unfolding information with me here today, and for spending your time and sharing it with the podcast listeners. You’ll be getting a tweet from me soon about the book.

Robin    01:37:55    And you know what I’m going to do? I’m going to retweet that. That might just be the second tweet that I will ever have sent.

Paul    01:38:04    Yes, you are on top of your game, sir. So thanks, Robin. It’s been fun.

Robin    01:38:09    It’s been awesome. Thank you very, very much.  

0:00 – Intro
3:01 – The Self-Assembling Brain
21:14 – Including growth in networks
27:52 – Information unfolding and algorithmic growth
31:27 – Cellular automata
40:43 – Learning as a continuum of growth
45:01 – Robustness, autonomous agents
49:11 – Metabolism vs. connectivity
58:00 – Feedback at all levels
1:05:32 – Generality vs. specificity
1:10:36 – Whole brain emulation
1:20:38 – Changing view of intelligence
1:26:34 – Popular and wrong vs. unknown and right