Brain Inspired
BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys

Support the show to get full episodes, full archive, and join the Discord community.

Doris, Tony, and Blake are the organizers of this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems, at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.

Transcript

Tony    00:00:04    The goal of the conference is to bring machine learning and computational neuroscience back together again. A lot of the major insights in deep learning and artificial intelligence came from neuroscience. In fact, you could basically say that almost all of them did.

Blake    00:00:26    There has been a lot of interest in the computational neuroscience community in bringing machine learning and AI back on board, but the other direction has yet to be fully recouped. That direction, taking inspiration from the brain to build better AI systems, is precisely the gap that I think we wanted to fill with this conference. And it is arguably still a gap.

Doris    00:00:53    I feel like this neuro-AI is as fundamental as, you know, physics or chemistry. It's the study of intelligence, of perception. There are certainly fads, for instance in how you analyze data using neural networks and so on, that's all true. But the fundamental quest to understand intelligence, I don't see how that can be called a fad.

Speaker 4    00:01:20    This is Brain Inspired.

Paul    00:01:34    Step right up, folks, and get your tickets. That's right, get your tickets to the nicest conference this year. Hello everyone, it's Paul. NAISys stands for From Neuroscience to Artificially Intelligent Systems. This is a conference held at Cold Spring Harbor, where the goal, as you'll hear Tony Zador say, is to bring together the machine learning world and the neuroscience world, with a particular focus on how neuroscience can help inform machine learning and artificial intelligence. I had noticed that the deadline to submit abstracts and get your tickets to join the conference is coming up: it's January 21st, right after the release of this podcast. So I thought it would be fun to have the three organizers of this year's conference on the podcast, just to have a broad conversation about their interests and topics related to the conference and neuro-AI in general. Doris Tsao is a new voice on the podcast.

Paul    00:02:35    She runs her lab at UC Berkeley, and she's interested in how our brains form invariant representations of objects. You could say she's already well known for her work studying face patch areas in the cortex, in non-human primates and in humans. Blake Richards is at McGill University. The last time he was on the podcast, we talked largely about his work figuring out how backpropagation, or something like backpropagation, might be implemented in brains. In this discussion, we talked about his more recent interests, for example, figuring out how multiple streams of representations can be combined to help us generalize better. And Tony Zador is at Cold Spring Harbor. The last time he was on the podcast, we focused on his paper, A Critique of Pure Learning, in which he makes the case that we need to pay more attention to evolution and the innate structures and abilities that organisms come into the world with.

Paul    00:03:36    During our discussion, we revisit the ideas from Tony's paper and use it as a springboard to talk about development and learning, and how these processes could be considered one kind of continuous optimization process. In general, we have a wide-ranging discussion about many of the issues relevant to the NAISys conference. So I encourage you to go to the NAISys website, which you can find in the show notes at braininspired.co/podcast/125, and consider whether it may be of interest to you to attend this year or another year. Thank you for listening, and enjoy. So I thought we would start, not so much by giving an introduction of yourselves, but maybe you guys can each talk about something that kept you up last night, something that you're thinking about scientifically. I know that there are many things that keep people up these nights, but in the science realm, something that you're working on that's just at the edge of your knowledge. Tony, would you want to lead us off?

Tony    00:04:38    Sure. Well, my lab is pretty diverse, so what keeps me up one night isn't necessarily what keeps me up the next night. I don't get a lot of sleep at all. But most recently, what has been keeping me up is that we've been working on applying some ideas about the genomic bottleneck to reinforcement learning. We've been trying to figure out how we can compress the networks that we use for reinforcement learning by a couple of orders of magnitude, and see if that can give us better generalization, better transfer learning. There's a lot of exciting stuff going on there, but that's at the edge of what we can do. It seems to be working, but there are some things that aren't quite there yet. So we're pretty excited about it.
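
To make the genomic bottleneck idea concrete, here is a minimal sketch of one way such a compression could work: store a small "genome" (per-neuron embeddings plus a tiny shared decoder that plays the role of developmental rules) and generate the full weight matrix from it. All sizes, names, and architectural choices below are illustrative assumptions, not the actual model from Tony's lab.

```python
# Sketch of a "genomic bottleneck" layer: instead of storing a full weight
# matrix, store a low-dimensional embedding per neuron plus a tiny decoder,
# and generate the weights from those. Sizes are illustrative.
import torch
import torch.nn as nn

class GenomicLinear(nn.Module):
    def __init__(self, n_in, n_out, genome_dim=8):
        super().__init__()
        # The "genome": one small embedding per pre- and post-synaptic unit.
        self.pre = nn.Parameter(torch.randn(n_in, genome_dim) * 0.1)
        self.post = nn.Parameter(torch.randn(n_out, genome_dim) * 0.1)
        # A tiny shared decoder stands in for developmental rules: it maps
        # a (pre, post) embedding pair to a single synaptic weight.
        self.decoder = nn.Sequential(
            nn.Linear(2 * genome_dim, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, x):
        n_in, n_out = self.pre.shape[0], self.post.shape[0]
        pairs = torch.cat([
            self.pre.unsqueeze(1).expand(n_in, n_out, -1),
            self.post.unsqueeze(0).expand(n_in, n_out, -1)], dim=-1)
        w = self.decoder(pairs).squeeze(-1)   # generated (n_in, n_out) weights
        return x @ w

layer = GenomicLinear(n_in=512, n_out=512, genome_dim=8)
dense_params = 512 * 512                       # parameters in a dense layer
genome_params = sum(p.numel() for p in layer.parameters())
print(f"compression: {dense_params / genome_params:.1f}x")  # roughly 30x here
```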

Paul    00:05:37    We're going to come back to that as well; I have questions about it. All right, great. Doris, what kept you up last night?

Doris    00:05:46    Yeah, I don't know about last night, but the question that I've been obsessed with for a long time now is how the brain solves segmentation. Our vision is really based on objects, and there must be some way that the brain manages to bind all the parts of an object together and track those parts as they move around. And whatever the code is, it should be fundamentally different from anything that we understand right now, because it has to be a dynamic code, right? If the object becomes twice as big, that code has to expand. In computer vision, people use color labels for this. What is the analog of that color label in the brain? I would give anything to know the answer, and that's one of the big problems we're working on right now.

Paul    00:06:35    So you're sort of famous for faces, right? Face patch areas in brains. But that wasn't your original interest. Your original interest was objects, and now you've returned to that.

Doris    00:06:48    Oh yeah. I joke that faces were this 20-year detour, and now I'm going back to what I want to do. Originally, in my first experiment when I was a grad student, I set up macaque fMRI and showed monkeys pictures of stereograms, because I wanted to understand how they represent 3D objects. And then I read the paper from Nancy Kanwisher about faces, and it seemed like a fun project, though it might not work, to show monkeys faces. And that sort of took on a life of its own.

Paul    00:07:17    A 20-year diversion then, huh? That's, I guess, how science works.

Doris    00:07:22    I hope that they'll come together. What we're discovering with face patches, it's not really about faces for me. I could care less about faces; if there's one part of the brain that I could lesion, it would probably be my own face patches, so I wouldn't, you know, be so shy. But the face patch system is beautiful. We call it the turtle's underbelly, right? It lets us get at the mechanisms for how the brain represents high-level objects, and it gives us an experimental handle on all kinds of questions related to high-level object representation, including, and I assume we're going to be talking about this, unsupervised learning: how does the brain learn to recognize a face from just a few observations? I think that's also going to connect to this tracking and segmentation problem I talked about. So yeah, the face patch system is about a lot more than just how the brain represents faces.

Paul    00:08:16    Is it a solved issue? I mean, there was controversy, right, over whether this particular brain area, speaking of Nancy Kanwisher's work, really is representing faces. Is that solved? Are we done with that?

Doris    00:08:32    No, it's not solved, but I think we do have a lot more insight into it. And one of the insights has actually come from deep networks, which came on the scene, I don't know, five or ten years ago. For the longest time, Nancy's lab and others had discovered these little islands of cortex selective for faces, bodies, colored objects, and other things, and we had replicated this in monkeys. It was a total mystery whether there are any principles governing how all of these regions are organized. There were also islands of surrounding cortex where no one had any idea what they were really doing. So it was a big question mark. And in some sense, maybe there is no principle at all, right? This is what you get: some islands of cortex that represent things that are understandable, and other islands that don't. And then deep networks came on the scene, and my postdoc, Pinglei Bao, I'm not going to go through the whole story, but he did a very simple analysis.

Doris    00:09:33    He passed a large number of objects through AlexNet and just did principal components analysis on the activations in layer fc6. Then you can look at the first two principal components, and they span an object space. The amazing thing is that if you look at what's in the different quadrants of that object space, one quadrant turns out to be faces. Another quadrant turns out to be things that look like bodies. And so something clicked, and I was like, whoa, what if all of IT cortex is actually laid out according to these two axes of object space that you discover with the deep network? That made a prediction about a new network of regions that we should find, which turned out to exist. So to a first approximation, it seems like IT cortex is actually laid out like an object space, and face patches are one quadrant of that. So it's starting to make sense. But I think these patches are also specialized for faces in a very strong way, right? If you just invert the contrast of a face, the cell's response goes way down, and things like that can't be explained by just a projection onto this generic object space. So it's still an open question, but we have a lot more insight now.
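
A rough sketch of the style of analysis Doris describes, assuming torchvision's AlexNet, whose first fully connected layer stands in for fc6. Image loading is stubbed with random tensors here, so a real, varied set of object images would be needed to see the quadrant structure she mentions.

```python
# Pass objects through AlexNet, take fc6 activations, and PCA them down
# to a 2-D "object space". Images are random placeholders.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()

def fc6_activations(images):
    """Activations of AlexNet's first fully connected layer (fc6)."""
    with torch.no_grad():
        x = alexnet.features(images)
        x = alexnet.avgpool(x).flatten(1)
        # classifier = [Dropout, Linear(fc6), ReLU, Dropout, Linear(fc7), ...]
        x = alexnet.classifier[1](x)   # fc6
    return x.numpy()

# Stand-in for a large set of object images (224x224 RGB, preprocessed).
images = torch.randn(200, 3, 224, 224)
acts = fc6_activations(images)

pca = PCA(n_components=2).fit(acts)
object_space = pca.transform(acts)   # each image as a point in 2-D object space
print(object_space.shape)            # (200, 2)
```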

Paul    00:10:45    Blake, were you up last night thinking about how face patches get learned in brains?

Blake    00:10:52    Not specifically, but the thing that's been keeping me up is related to some of this stuff. One of my big worries right now is the question of how to develop artificial intelligence that can engage in what we call systematic generalization. That is, not just generalization to unseen data points, or even data points that might come from a different distribution, but specifically generalization that obeys some systematicity, some rules as it were. And humans are pretty good at this, right? You can look at some puzzle. Say I give you a shape-based analogy: I show you a square, a triangle, and a square, and then I show you a circle and a diamond, and you have to fill in the last one. You'll immediately detect the rule. You say, okay, it goes one shape, a second shape, then back to the first shape.

Blake    00:11:51    So then you apply it again, and immediately you can get the answer. You don't have to see any more data points; the rule just makes sense to you right away. And this is actually surprisingly hard to get in vanilla artificial neural networks; they don't show this kind of systematic generalization. The old-school answer to what you needed for that was symbol systems, and that's still the answer some people give. But for a variety of reasons, which we can discuss if we wish, I've come to the conclusion that systematic generalization doesn't depend on the existence of a symbol system or anything like that. It just depends on the existence of separate representations: on the one hand, for static object features in the environment, and on the other hand, for relations between objects, be they dynamic, movement-based relations, or just spatial relations, or any other kind of relation.

Blake    00:12:50    You need this distinction between the objects that you represent and the relationships between them, and those have to be represented by different systems. There's some data coming out of a few labs, and some preliminary stuff from my lab as well, showing that if you have these separate representations, you do get systematic generalization. So then the interesting question for me is: how could those separate representations possibly emerge? This is where we published a recent study showing that if you optimize a deep neural network that has two anatomically segregated streams, and you optimize it with an unsupervised loss to do prediction, you'll actually get segregated representations for static object features and movement or relation features. So this is a broader interest for me now that ties back into the unsupervised learning question, because I'm starting to think more and more that maybe the way we get to systematic generalization in the brain is by having systems that, through evolution or learning within the lifetime, have been optimized in such a way that you get separate representations for the relationships between objects and the objects themselves.
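
Here is a toy sketch of the kind of setup Blake describes: a single network with two anatomically segregated streams trained on one unsupervised predictive objective. The architecture, dimensions, and loss below are invented for illustration and are not the published model.

```python
# Two parallel streams with no cross-talk until a shared predictive head,
# trained to predict the next frame's features. The hypothesis is that
# the streams specialize (static features vs. motion/relations).
import torch
import torch.nn as nn

class TwoStreamPredictor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.stream_a = nn.GRU(input_size=128, hidden_size=dim, batch_first=True)
        self.stream_b = nn.GRU(input_size=128, hidden_size=dim, batch_first=True)
        # Shared head predicts the next frame's features from both streams.
        self.head = nn.Linear(2 * dim, 128)

    def forward(self, frames):                  # frames: (batch, time, 128)
        za, _ = self.stream_a(frames)
        zb, _ = self.stream_b(frames)
        return self.head(torch.cat([za, zb], dim=-1))

model = TwoStreamPredictor()
frames = torch.randn(8, 20, 128)                # a batch of feature sequences
pred = model(frames)[:, :-1]                    # predict t+1 from history up to t
loss = ((pred - frames[:, 1:]) ** 2).mean()     # unsupervised predictive loss
loss.backward()
```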

Blake    00:14:13    And I think that once you have those separate representations, you can get systematic generalization. The reason is actually pretty simple: systematic generalization depends on you having a sense of there being relations that can be applied to anything, no matter what the object is. And once you have those separated representations for your relations, that becomes possible.

Paul    00:14:37    Is hierarchy involved here? Because the way you described it, it sounds like a single level, right? Two representations, and then there's some generalization. But is there a hierarchical structure that you're thinking about as well?

Blake    00:14:49    I mean, you definitely need hierarchy if you're dealing with any kind of complex, high-dimensional input. In principle, this same rule, that you need separate representations for your relations and your objects in order to get systematic generalization, could even apply in situations where you already have simplified representations that don't require any additional hierarchy. But in 99% of the tasks that we would ever want an AI to do, and certainly for everything the brain does, yes, you need hierarchy, because you don't care about, say, pixel-to-pixel relationships, right? You don't really care what this retinal ganglion cell's relationship to that retinal ganglion cell is. What you care about is the relationship of, say, where is my face relative to this other person's face, or something like that. These are the sorts of high-level relationships that you care about, and that requires hierarchy.

Paul    00:15:46    So I was just reading about induction, deduction, and abduction, and how humans are so great because we are great abductors, right? We perform abductive inference. Is this related at all to that? Forgive me for the naive question.

Blake    00:16:03    No, I mean, it's definitely related to this stuff, because what you could say to some extent is that standard deep neural networks are really good at induction, and I think there's a lot of evidence for that at this point in time. So both deductive and abductive reasoning are arguably still missing. And indeed, when we talk about systematic generalization, that is precisely related to these questions.

Paul    00:16:32    So, Tony, I know that you hate learning. The last time you were on the podcast, we talked about your paper, A Critique of Pure Learning. You hate learning personally, right? Yeah, you seem incapable of it. No. So let's talk about the conference that's coming up. The deadline to apply and submit abstracts is January 21st, I believe, right when this podcast comes out. It's From Neuroscience to Artificially Intelligent Systems: NAISys.

Paul    00:17:08    It's kind of interesting, because I mentioned Tony's paper since it is, in some sense, the antithesis of learning. That's not strictly true, obviously. But the rage these days is using these learning systems: artificial neural networks, deep learning networks. Doris has already mentioned her work with unsupervised learning, and Blake just mentioned the same. So first of all, what's the conference, and what's the goal of the conference? And then, how did someone so skeptical of deep learning networks come to be one of the organizers? Are you the dissenting voice among the attendees?

Tony    00:17:51    Well, I'll start with what the conference is. The goal of the conference is, in some sense, to bring machine learning and computational neuroscience back together again. A lot of the major insights in deep learning and artificial intelligence came from neuroscience. In fact, you could basically say that almost all of the major advances in artificial intelligence came from looking at neuroscience. The very idea of formulating the question of artificial intelligence in terms of interactions between collections of simple units, which we might be tempted to call neurons, suggests its deep roots, right? And in fact, interestingly, even the von Neumann architecture, which is in some sense the opposite of neural network-type architectures, even that was an explicit attempt by von Neumann to model, or at least capture, certain essential features of how the nervous system works.

Tony    00:19:08    If you go back to the technical report from, I think, around '47 or so on the first von Neumann computer, he devotes an entire chapter to comparing how the architecture they propose relates to that of the brain. And so convolutional neural networks and the ideas of reinforcement learning, all of these come from tapping into neuroscience. In fact, in the early days, NeurIPS, which was back then called NIPS, was a meeting that drew together both people from machine learning and neural networks and people in computational neuroscience. In fact, they were the same people. That was my go-to meeting when I was a graduate student; it was the only substantial meeting where you could present computational neuroscience. But by the mid-nineties, those two fields had diverged to the point where it wasn't really useful to have them as one meeting.

Tony    00:20:20    And nowadays, I think many people who work in artificial intelligence have sort of lost sight of the fact that knowledge from neuroscience was ever anything but, if you like, an inspiration or an existence proof. To hear a lot of modern AI people talk, the role of neuroscience in AI is comparable to, let's say, the role of birds in aeronautical engineering: yes, in the beginning, man looked up at flying birds and said, if only we could fly too, but that's where the connection stops. But of course, that's not really true. So the goal of NAISys is to bring these two communities back together and get a conversation going again, so that in the event that current technologies and current approaches asymptote at some point, incredible though these advances are, we will still need new ideas, and this meeting will sort of provide the foundation for those new ideas.

Blake    00:21:38    If I can add to that too: I think one of the interesting things about the way the field has evolved in recent years is that there has been a lot of interest in the computational neuroscience community in bringing machine learning and AI back on board to do our explorations of the brain, but the other direction has yet to be fully recouped. That direction, taking inspiration from the brain to build better AI systems, is precisely the gap that I think we wanted to fill with this conference. And it is arguably still a gap, because if you go to Cosyne or wherever, you see a lot of deep neural networks, a lot of AI stuff, but they're all aimed at answering questions about the brain. Whereas at NeurIPS, I would say that though there is a growing neuroscience contingent, it's still a very small part of it, and by no means the mainstream of the conference.

Paul    00:22:40    Doris, do you agree with all of that? You were just talking about using unsupervised learning models to inform neuroscience, which, as Blake was just saying, is the general direction of the arrow these days. But the title of the conference, From Neuroscience to Artificially Intelligent Systems, speaks to the arrow that Blake was talking about. Do you agree, first of all, that it went away, that the AI community doesn't appreciate neuroscience? And in your own work, you seem to be going the modern, normal way, from AI to neuroscience. Do you have aspirations to reverse that arrow? Sorry, that's like seven questions.

Doris    00:23:35    That's a lot of questions. First, I should say I've never attended this NAISys conference before, so I'm super excited. I'm not totally sure what to expect, except to meet some incredibly smart people thinking about this question of how brains can inspire machines and, vice versa, how machines can inform our understanding of brains. And I don't know that people in AI have been ignoring the brain these last 10 years. I think some of them have been deeply interested in the brain throughout. I think Geoff Hinton has always seen himself first as someone whose goal is to understand the brain.

Blake    00:24:10    A hundred percent. And let me say, there has always been a remaining core community in the AI world that believes in the need for taking inspiration from neuroscience.

Tony    00:24:26    I think actually some of the most influential people, precisely the most influential people, are the ones who do keep paying attention to neuroscience. Clearly Yann LeCun, clearly Yoshua Bengio cares, I think. The people who have made many of the major advances actually were paying attention. I think what is lost is the younger generation: modern AI has become such a large field on its own that it sort of feels self-contained. I think that's really the issue. It's almost as if one were to try to make fundamental advances in, let's say, electrical engineering without quite understanding the underlying physics.

Blake    00:25:29    Yeah, I also want to add to that, because I think there's a funny dynamic that has come about from the fact that, as Tony said, the most influential people are the ones who still fundamentally both seek and believe in the need for inspiration from the brain. And that is that there's a large contingent, I feel, of AI researchers who see themselves almost as rebels for articulating the idea that we never need to look at brains. This is the cool thing to say, as it were, for some people, precisely because they see someone like Geoff Hinton or Yoshua Bengio say, oh, brains are critical inspiration for AI, and they're like, no, no, no, I'm going to show that's nonsense; these old guys don't know what they're talking about.

Blake    00:26:21    And so I have many interactions with young researchers, and on some level their skepticism, I think, is good; it's healthy to be skeptical of what the older generation tells you. But it's always funny for me when I have conversations with some really young people in the tech world and they say to me, wow, you know, none of this really has anything to do with brains; it's all just matrix multiplication and stuff. And meanwhile, a part of me wants to sit them down and say, well, listen up, Sonny Jim, let's do a history lesson here, and go through the entire process with them. So what's funny is that I think many AI people have left the neuroscience stuff to the side, and some of them see that as a sort of bold rebellion against the old guard.

Doris    00:27:06    Oh, I was just going to say, Tony, this also relates very much to your famous essay, right? We shouldn't ignore these hundreds of millions of years of evolution. The brain has figured out so much structure, and we could get a huge leap if we can figure out what those structures are, the fundamental structures that enable intelligence. It just seems ridiculous to ignore that. Like, why?

Tony    00:27:31    Yeah, absolutely. I mean, that's what really keeps me up at night. It's the idea that, you know, if we wanted to achieve faster-than-light travel and some aliens plunked down a spaceship capable of faster-than-light travel, we would sit there trying to reverse engineer that spaceship to figure it out, right? And we have that: we are surrounded by creatures that have solved the problems that we're trying to solve. Not just humans, but animals, simple animals who are outperforming us. Worms, flies, bees, spiders, my dog, rats: they're all outperforming our machines at many, many things that we only wish we could build machines to do. And some of them are so simple, and we still don't understand them. It's embarrassing. There's this great cartoon of a bunch of Lego pieces, right, and an empty box: okay, we have everything, we just need to figure out how to put them together. We know so much, and yet we don't quite know how to put the pieces together in the appropriate, meaningful way. So it's just such an obvious source of not just inspiration, but specific guidance.

Tony    00:29:07    When I was in graduate school, actually, I think it was at a summer course at MBL, people were staying up late and drinking, and one very senior, respected neuroscientist was talking about how we were on the brink of understanding how the brain works, and he basically started prophesying the coming of the Messiah of neuroscience, who would, you know, reveal the truth to us. Maybe he had a little too much alcohol on board, and so he was personifying it. But I think many of us feel like we're just on the brink, like if only somebody could explain to us what we're missing. And some of us maybe even hope to discover that missing thing ourselves. It's just so frustrating that we know so much, and we don't quite know what that missing piece is.

Doris    00:30:18    Yeah. Blake was talking about this factorization between relations and object properties, right? That reminded me, and I'm not sure how you think about it, Blake, but when you try to generate an invariant representation, on the one hand you're saying that this thing that's transformed is the same, so you extract those invariant features; at the same time, you want to know what that transformation is. Did it rotate? Did it expand? So I think a lot of the structure that's somehow built in through this genome that's supposed to run the wiring is the structure of the 3D world: the geometric aspects, how things transform, being able to deal with that. Because once you have that, then you can do unsupervised learning and all of that, right? Because you can track an object, and suddenly you have millions, billions of training samples for free. So my hunch is that if we can understand that, it will go a long way. I really resonate with Blake there.

Blake    00:31:19    One thing that I'd say about that, though, and I think this gets at where Tony's essay has influenced my thinking a bit more, and is really important to remember when we're talking about inspiring AI with the brain, is precisely what Doris kind of gestured at there. If we think about the visual system for a moment, it has surely been optimized over the course of evolution to engage in exactly that kind of invariant representation of the object, and then you have your representations of the spatial relations and such, so the object can rotate and it doesn't look different to you. And this is all built into our genomes; I suspect that there's some of that in animals the instant they're born. But then on top of that, there's a whole layer of unsupervised learning throughout early life.

Blake    00:32:17    That takes those underlying inductive biases, which help us segregate out constant objects and relations between objects, and does a lot more learning on top of them, so that we can learn really particular features of particular objects. You know, this is how a cat moves, this is how a ball moves, this is the nature of playing with a spinning top, et cetera. All of those particular relations and properties that hold for unique objects that evolution couldn't necessarily have known about in advance are what we learn through unsupervised learning. But that's all done on the base of a fundamental, very strong inductive bias to have these invariant representations of constant objects and relations between them in a 3D world.

Paul    00:33:10    Since you mentioned Tony's paper, and we don't need to make the whole conversation focus on this, but I recently had Robin Hiesinger on the podcast, I think it may have been last episode actually. He's the author of The Self-Assembling Brain. The way it's generally pitted is: there's evolution, there's innateness, right? So we come into the world with structure, which is encoded in the DNA somehow, and then there's learning on top of it. But his argument is that what we are forgetting, which is an impossibly complex myriad of information unfolding, as he calls it, is that developmental process from genes to connectome; that's a crucial missing aspect. He considers it an algorithm from the DNA to the connectome, because our DNA can't specify the entire connectome, right? And then, on top of that, there's learning. So do we need to consider development, or can we really just figure out the right structure and build that structure into the connectome, or, in the case of artificial networks, into the network?

Tony    00:34:19    Yeah. I think it's clear, or at least it's the way that I would think about it, that the way you get from a genome to any physical structure is via development. And the observation that the amount of information in the genome is orders of magnitude lower than the amount of information in the connectome implies that there have to be relatively simple rules for going from a genome to a connectome, and those are developmental rules. Now, some of those rules are activity dependent, and it's probably those activity-dependent rules that, over the course of evolution, got co-opted and formed the basis for activity-dependent learning. In fact, from a neuroscientist's point of view, at least from a synaptic neuroscientist's point of view, it's sometimes pretty hard to distinguish mechanisms of development from mechanisms of learning. LTP, long-term potentiation, is the leading candidate synaptic mechanism for learning and memory.

Tony    00:35:44    But in fact, some of the earliest results on LTP were in development. So from an organism's point of view, there really is no sharp distinction between mechanisms of development and mechanisms of learning. Some of the very earliest mechanisms of development are clearly distinct: neurogenesis and things like that, and also laying out the basic wiring diagram of a neural circuit, don't necessarily depend on activity. But for the most part, learning and development go hand in hand in biology, and the distinction between them is kind of artificial.
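
To put rough numbers on the information gap Tony mentions, here is a back-of-the-envelope calculation with loudly approximate assumptions: a genome of ~3.2 billion bases at 2 bits per base, ~86 billion neurons with ~10,000 synapses each, and ~37 bits to name each synaptic partner.

```python
# Genome vs. connectome information content, order-of-magnitude only.
import math

genome_bits = 3.2e9 * 2                        # ~6.4e9 bits (~0.8 GB)
synapses = 8.6e10 * 1e4                        # ~8.6e14 synapses
connectome_bits = synapses * math.log2(8.6e10) # ~37 bits per partner
print(f"connectome / genome = {connectome_bits / genome_bits:.1e}")
# ~5e6: the wiring diagram holds six or seven orders of magnitude more
# information than the genome, so development must compress via rules.
```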

Blake    00:36:32    Yep, I would a hundred percent agree with that. And I think Tony made another really interesting point there, which is that what we call learning is probably a series of mechanisms related to general developmental properties of the nervous system that got co-opted over the course of evolution, and which somehow, in mammals and some birds, got linked into things like error-reduction mechanisms. And that is what then transitioned us toward what we might call learning in the proper sense of it.

Paul    00:37:10    The proper sense? So there's a bias right there, right?

Blake    00:37:13    Well, okay, here's, I guess, where I distinguish learning from other activity-dependent properties, and this goes back to work I did in my PhD, where I was doing a lot of work on synaptic plasticity in tadpoles. Whenever we would show changes in the tadpole's visual system as a result of activity-dependent processes, people would always ask: how do we know that's not just some program that the genome has built in, but which needs some activity to unroll? And the answer was always: we look specifically for stimulus-instructed changes. If you can show that the nature of the changes depends not just on there being activity, but specifically on the stimulus you show the animal, so that if you show different stimuli you get different results in terms of how the brain develops, then you've got something that's arguably learning, because it's actually reflecting the animal's experiences, rather than being simply a gate that opens to allow the developmental program to unroll.

Paul    00:38:23    Blake, I was going to ask you about this anyway, so I'm going off of what you just said. And Tony, you brought up LTP and synaptic mechanisms of learning. I was curious what your take is on the new dynamical fad, where you look at manifolds changing, at neural activity progressing through a low-dimensional manifold space, and the idea that learning can take place in the dynamics of the network, that it's not all plasticity based. Are you on board with this story?

Blake    00:38:56    I'm certainly on board with it. And I mean, I think we've known that for a long time, because there are certain types of tasks that you don't need long-term potentiation for, and so therefore it has to be something other than synaptic plasticity on some level, right? And the dynamics is a reasonable place to start. My favorite demonstration of that was actually a paper from Jane Wang and Matt Botvinick, where they do meta-learning in a deep neural network. But the meta-learning, quote unquote, is interesting because the inner loop was actually just the dynamics of activity. They show that if you train the network such that the dynamics of activity implement your sort of plasticity in the inner loop, and then you've got your outer loop where you actually change your synaptic connections, you can end up recapitulating a lot of really fascinating experimental evidence related to how animals and people use their prefrontal cortex to solve a whole host of problems. So that's just one example paper, and there have been a few around for a while. I think that trend is gaining steam precisely because on some level there's something really, really right there.
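
A skeleton of the Wang and Botvinick-style meta-RL setup Blake mentions: the outer loop changes synaptic weights across episodes, while within an episode the "learning" is carried entirely by the recurrent state, with no weight changes at all. The bandit task and all hyperparameters below are invented for illustration.

```python
# Meta-RL sketch: inner loop = recurrent dynamics, outer loop = weight updates.
import torch
import torch.nn as nn

class MetaRLAgent(nn.Module):
    def __init__(self, n_actions=2, hidden=48):
        super().__init__()
        # Input at each step: one-hot previous action + previous reward.
        self.rnn = nn.GRUCell(n_actions + 1, hidden)
        self.policy = nn.Linear(hidden, n_actions)

    def run_episode(self, arm_probs, steps=50):
        h = torch.zeros(1, self.rnn.hidden_size)
        prev = torch.zeros(1, self.policy.out_features + 1)
        log_probs, rewards = [], []
        for _ in range(steps):
            h = self.rnn(prev, h)   # inner loop: state update, no weight change
            dist = torch.distributions.Categorical(logits=self.policy(h))
            a = dist.sample()
            r = torch.bernoulli(arm_probs[a])
            log_probs.append(dist.log_prob(a))
            rewards.append(r)
            prev = torch.cat(
                [nn.functional.one_hot(a, self.policy.out_features).float(),
                 r.view(1, 1)], dim=-1)
        return torch.cat(log_probs), torch.cat(rewards)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
for episode in range(3):                       # outer loop: weight updates
    arm_probs = torch.rand(2)                  # a fresh two-armed bandit
    logp, rew = agent.run_episode(arm_probs)
    loss = -(logp * (rew - rew.mean())).sum()  # simple REINFORCE-style loss
    opt.zero_grad(); loss.backward(); opt.step()
```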

Paul    00:40:06    Well, I was up too late the other night, and I had the thought that maybe the quote-unquote learning, versus the proper learning you talked about, that takes place in the dynamics may not be learning per se, but just movement within an inductive bias that's already built in, where that inductive bias is built in through the synaptic connection weights. It's like we have these capabilities of moving along the dynamical manifold landscape, to throw some jargon out there, but we can only move into spaces that already exist. It's not true, quote-unquote, learning that's happening, because we're already set up; we already have those available spaces to visit.

Tony    00:40:52    Well, what makes that less learning-like than any other kind of learning? I think pretty much we can only learn things that we can learn, right? A quintessential example of something we can learn is language. And yet, you know, I believe that we have circuits that predispose us to learn language. Now, the details of the specific language we learn depend on the language that we're exposed to, and it's hard to articulate exactly what it is that is common among all languages, but I think it's pretty clear we have some, if you like, innate circuitry that enables us to acquire a language very quickly. There are some free parameters that get filled in pretty quickly over the course of the first few years of life that allow us to acquire sounds and words and basic syntax and grammar. Yup.

Doris    00:41:59    Yeah. Also, I would say, from the experimental side, there are now some amazing experiments being done with BMIs to see the capacity of the brain to learn. For me, at least, it was kind of shocking that you can set up a BMI so a mouse can learn to control a cursor based on the activity of pretty much any neurons, an arbitrarily chosen set of neurons in some arbitrary piece of brain. They can control a cursor by controlling that activity; that, they could learn to do. It was pretty shocking to me, and it sort of goes against this idea that you're only able to learn very specific things.

Paul    00:42:37    But in that case, just continuing my late-night thought experiment, couldn't you argue that the mouse already had the ability to make those movements, right? It can't explore some completely novel way of mapping. In my thought experiment, true learning would require actual changes in the synaptic structure, in the connections between the units. Because you could say that the mouse already had the ability to visit those spaces, and had probably already visited those spaces over time, so it's not that challenging to remap the population dynamics to that space. Does that make sense? I don't know why I'm arguing about this; this is about you guys, not me. So I'm sorry.

Doris    00:43:27    To me, it's still pretty stunning. You choose an arbitrary set of neurons, and who knows what they're actually coding, and you can get the mouse to use the activity of those neurons to control this thing. It suggests something about incredible flexibility, right? You mentioned remapping: there has to be some kind of remapping, and whatever the mechanism is, it has to be incredibly flexible. And it gets at this question of how the brain does dynamic routing. Like, I tell you, Paul, if you see me at NAISys, give me a hug. If you remember that, you've suddenly made a connection from your face cells representing me to your cells representing hugging. And it's a dynamic connection; that connection has never happened before. How on earth do you do that?

Paul    00:44:15    But I have not hugged many people, Doris. I have hugged people, though, so that's within the realm of my current capabilities, right?

Doris    00:44:21    But to wire it up specifically for me, that's the magic part.

Blake    00:44:26    And I just want to say something that I think gets at what you're trying to get at, Paul, and this ties back to Tony's first answer to you as well. All learning systems are constrained in what they can learn. There is no such thing as a learning system that's not constrained in what it can learn; in fact, this has been proven mathematically with the no-free-lunch theorems. If a learning system truly has no prior on what it can learn, it basically just learns everything poorly. So to get good learning, you necessarily have to constrain your system to learn well in certain areas. And in this way, if we show that brains have certain restrictions placed on the sorts of things they can learn, that's unsurprising. It would almost be more surprising if those restrictions didn't exist.

Blake    00:45:15    And that's where I agree with Doris's point, which is that sometimes it's shocking the things that brains apparently can learn, when it seems like it shouldn't necessarily be something that's learnable. Like, why would we not be constrained against learning that? I think different species have different degrees of this, and for me, humans are remarkably adept at learning a surprisingly large number of arbitrary things. But that doesn't mean that we're not constrained. We're very much still constrained; it's just surprising how arbitrary the things we can learn can be.

Tony    00:45:54    Just to circle back to the question that you asked a while ago about whether I hate learning, and how much I hate learning.

Tony    00:46:06    Obviously, I personally hate learning things. But the point that I was trying to make in that essay was not that learning isn't a thing that exists, but that a great deal of what non-neuroscientists sometimes imagine depends on learning probably doesn't depend on learning by an individual over the course of his or her lifetime, and that we are biased by paying attention to humans, who probably learn way more than almost any other animal, probably more than any other animal. But even we don't learn as much as we think we do, and most animals don't actually require a great deal of learning to function properly. They're capable of learning, but if you look at most insects, they can't afford to spend their first couple of months figuring out how the world works, right? They come out of whatever it is that insects come out of.

Tony    00:47:14    And they're pretty much ready to roll, or fly, or bite, or crawl, or do whatever they're going to do. I mean, I have colleagues who study learning in Drosophila, and so, you know, flies are capable of learning, and that certainly is adaptive. But many of the things we're impressed that insects, and frankly even mammals, do probably don't require a great deal of learning; in fact, probably just a bit of fine-tuning to the environment. You watch a squirrel jumping from tree to tree: that squirrel didn't figure out de novo how to jump from tree to tree, yet all squirrels learn to do it pretty well.

Blake    00:47:57    And I just want to note something, because I think there's this misperception that there's a big divide on this question. Tony has actually convinced me of this point, and I really don't think that it's incorrect to say that, for the vast majority of species, a lot of the learning that has occurred, quote unquote, was actually optimization over the course of evolution. What is maybe sometimes misunderstood about this argument, though, and Tony or Doris, you can disagree with me if you do on this point, is that it doesn't mean that for AI the message is to hard-wire human-engineered features. The mental jump that people are making there is to say: okay, animals have a lot of innate machinery, therefore we should give AI innate machinery. But they're forgetting that the animals'

Blake    00:48:57    innate machinery was delivered courtesy of an optimization routine, rather than a human engineer. And this is the problem, because human engineers suck at delivering the kinds of things that you need for AI. That's what we discovered over the course of 50 years of failed AI research. So although everything Tony is saying is true, animals have all this innate machinery, and a squirrel probably just tweaks whatever existing programming is there in order to learn how to jump from tree to tree, that doesn't then mean that the solution for AI is for us to sit down and try and be like, hmm, okay, let's think.

Tony    00:49:36    Absolutely. No, I think that's absolutely right. We have this innate machinery, and if we're going to try to engineer it, the solution isn't obvious. So far, we were given two choices, right? One choice is to hand-engineer those features, either by using your imagination or possibly by copying the engineered features from animals; that's choice one. Choice two is to learn them de novo each time you train the system. And I'm arguing there's a third route, which is that you lay a foundation of useful priors, and maybe you get them through an optimization algorithm. And frankly, just because evolution got them through an evolutionary algorithm doesn't mean that's exactly the algorithm we need to use. In fact, evolution is a lousy algorithm, right? Because it doesn't use a gradient; evolution works because it operated over vast numbers.

Tony    00:50:40    I want to try to do a back-of-the-envelope calculation on this: something like 10 to the 30 individuals have contributed to our genome. Even with really fast GPUs, it's going to be a while before we can simulate 10 to the 30 organisms and use the outcome of that as the basis of our systems. So no, the insight that we have is that gradients are really useful for finding your way around a high-dimensional space, right? So if we're going to recapitulate evolution, we're probably going to have to do it using gradients. And then the idea is that we shouldn't have to redo that each and every time we train a network; we should sort of figure out some kind of collection of foundational structures.

Tony    00:51:38    Each time we train a network, usually we start from scratch, and, you know, there's been some recent work on not starting from scratch each time, especially with language networks, because there we basically have no choice. Training one of these causes the lights to dim in Boston for a couple of days, it requires that much energy and compute. So it seems like at some point you can't keep retraining from scratch each time. But I think the lesson there is far more general: we have to figure out how to reuse the training that we've done, over and over again, in a useful way. So, you know, when I was a kid, we used to stump each other by asking, do you walk to school or carry your lunch? The choice between learning and exploiting innate structures is a false dichotomy like that. The answer is: we should do both.
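
As a sanity check on Tony's 10-to-the-30 estimate above, here is the arithmetic with deliberately generous, made-up hardware numbers: suppose a full organism "lifetime" could be evaluated in one millisecond, with a million GPUs running in parallel.

```python
# How long would it take to simulate 1e30 organisms? Assumptions are generous.
evals_per_sec = 1e3 * 1e6          # 1e9 organism evaluations per second total
seconds_per_year = 3.15e7
target = 1e30                      # individuals that contributed to our genome
years = target / (evals_per_sec * seconds_per_year)
print(f"{years:.1e} years")        # ~3e13 years, far longer than the age of
                                   # the universe (~1.4e10 years), hence the
                                   # appeal of gradients over brute evolution
```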

Paul    00:52:41    Do you think, though, that we're anywhere near understanding the capacity? Thinking about the high-dimensional structure of, you know, 86 billion neurons: do you think that we are anywhere near appreciating the actual heavy lifting that evolution has done to create that particular high-dimensional space? We're these amazing general learners, and it's amazing the different types of things that we can learn and recombine, but on the other hand, constraint, like Blake was saying, is super important. Do we appreciate that high-dimensional structure enough, or do we think, okay, it's so high dimensional, it can just do anything?

Blake    00:53:25    I think most people recognize the importance of the high-dimensional structures that have been optimized by evolution for the unique properties of human thought. And certainly anyone who's tried optimizing neural networks for any lengthy period of time will appreciate just how amazing the product that evolution has produced is, because you can get a lot of really cool, funky, amazing behaviors with gradient descent, but getting the unique mix that you see in animals in general, not just people, is turning out to be remarkably difficult. So I think anyone who has spent some time with these optimization procedures will respect evolution's contribution quite a bit.

Doris    00:54:13    Yeah. And you guys have talked about how you can use evolution to build the most powerful machines. As a neuroscientist, my interest is really to understand the brain, and there are different ways of understanding, right? There's this fad right now of regressing the activity of neurons against units in deep networks. That's one type of understanding, but I think deep understanding is going to require understanding the structures. It's sort of like a simpler example: can you calculate the probability of getting two heads and a tail if you flip a coin three times? You could figure that out by doing a Monte Carlo simulation. You could figure it out by writing out all the outcomes, HHT, HTT, and so on. Or you could figure it out by actually understanding the structure, the binomial distribution. I think all of us would agree that the last form of understanding is the real understanding. Similarly, just taking a neural network, regressing, and saying it explains whatever percent of the variance, that's not totally satisfying, right?
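
Doris's coin example, worked all three ways, Monte Carlo, brute-force enumeration, and the closed-form binomial expression P = C(3,2) * (1/2)^3 = 3/8:

```python
# Probability of exactly two heads in three fair coin flips, three ways.
import itertools
import math
import random

# 1. Monte Carlo simulation (approximate).
n = 100_000
mc = sum(sum(random.choice([0, 1]) for _ in range(3)) == 2 for _ in range(n)) / n

# 2. Enumerate all 2^3 = 8 outcomes (HHT, HTH, THH, ...).
enum = sum(sum(o) == 2 for o in itertools.product([0, 1], repeat=3)) / 8

# 3. The binomial structure itself: C(3,2) * (1/2)^3.
closed = math.comb(3, 2) * 0.5 ** 3

print(mc, enum, closed)   # ~0.375, 0.375, 0.375
```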

Blake    00:55:19    I agree with that. And I just want to say, I think sometimes there is an unfortunate tendency for people to think of the contribution that machine learning can make to neuroscience as being fully encapsulated by that approach that just regresses neural activity against deep neural networks. I think that provides a bit of understanding, as Doris said, but in my mind it is only really a tool for gaining understanding if you're using it to answer other questions with the neural network. Simply showing that you have a network that you can regress well against the data is itself not necessarily that informative. It doesn't tell you nothing, but it's not necessarily that informative. Instead, what you want to do is use those models to, as it were, understand the distribution, and to think about the principles by which you can get models that are better fits to brains. It's only by taking that principled approach, and using these models as normative guides, that we get to something like real understanding. Simply doing the regressing itself is not, I agree with Doris, sufficient for understanding. And, and this is ultimately my point, it's also not the entirety of the program that neural networks and machine learning have to offer neuroscientists.
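
For concreteness, here is the regression style of analysis being discussed, in miniature: ridge-regress recorded neural responses onto a network's unit activations and report held-out explained variance. The data below are random placeholders standing in for real activations and recordings, so the score will be near zero.

```python
# Regress neural responses on deep network activations; report R^2.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
acts = rng.standard_normal((500, 4096))      # network activations per stimulus
neurons = rng.standard_normal((500, 50))     # recorded responses per stimulus

X_tr, X_te, y_tr, y_te = train_test_split(acts, neurons, random_state=0)
model = Ridge(alpha=1e3).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                 # variance explained, held-out
print(f"explained variance: {r2:.2f}")       # ~0 here, since data are random
```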

Doris    00:56:44    I think Tony's essay also, for me personally, introduced another dimension of understanding: understanding how the genome encodes these structures that enable learning. It sort of always festered in the back of my mind. I had heard the statistic that the genome, you know, you can put it on a CD-ROM or something, and it seemed kind of incredible, but I never, like Tony, really worked out the implications of that. You have to specify all of these learning rules on that CD. So I feel like if we're going to understand the brain, we have to understand that question too. Marr had the famous three levels; this, I feel, is like a fourth level: given this computational structure, how do you actually wire it up with that small amount of information?

Tony    00:57:30    In some sense, I consider it good news, because one concern that I've always had is that the brain is just infinitely complicated. There's this bag-of-tricks idea that the brain is basically just a collection of kludges. And although there's clearly some of that going on, clearly a lot of specialized adaptations that you'll only understand if you really dig very deep, the overall description length of the entire circuit is just not that long. There's an upper limit, which is the size of the genome, and it's not even being optimally used in some sense, and not all of the genome is used to specify the brain. So, actually, I did another calculation: the difference between a human brain and a macaque brain actually fits on an old-school floppy disk. It's of order one megabyte.

Paul    00:58:34    Half our audience doesn't know what you're talking about.

Tony    00:58:39    So, you know, I think it's good news that what makes us special as humans is really not that much. Now, one megabyte of this stuff could actually have a huge impact, but figuring out what it is, writing it down, turns out probably not to be so hard.

Paul    00:59:01    Well, this is why I brought up development earlier. And, um, I mean, I’m just unbiased because of my recent conversation with people like Robin, he’s a singer. Um, and well, other people I’ve had on the, on the podcast recently, Kevin Mitchell, for instance, um, that the, their argument is that we’re actually missing, like what is actually specified in the genome. Isn’t the rules for connections. It is the rules for development and that through the developmental process, that algorithm actually changes.  

Tony    00:59:30    Yeah. Yeah. That’s implicit and say, it’s not. So it’s clear that we will not like dig into the genome and suddenly find the part where you can unpack the matrix that reflects the co you know, the connector. I don’t, you know, not, not the recent movie, but the connectivity matrix among, among neurons, right? Like we’re not going to, ah, there, you know, let’s just do Jesus on this. No, it’s not. Um, it’s not simply a bunch of, of synaptic weights, even, probably in C elegans where you could actually lift all the weight, see elegance being a worm as 302 neurons and 7,000 synapses. Right. Even though the C elegans genome is a little smaller than ours, um, that kinda activity matrix for the, uh, circuit, the entire reign circuit of the worms C elegans would fit comfortably into the genome, but that’s still not how it’s done.  

Tony    01:00:30    Right. So developmental rules are, you know, interesting and complicated, but there are rules, right? So at no point would you expect to find a list of connections within, uh, within a genome. So I, I think, I think we’re all in agreement that the way you get from genomes to circuit is via interesting developmental rules. Um, whether that, whether understand, like, I think understanding those rules is fascinating in its own, right? Whether that will be the path to understanding, um, you know, the computational roles of neural circuits. I don’t know. I’m, I am getting increasingly interested in development in the hopes that maybe it will provide insight, but you know, there, there are possibly many ways of figuring out how to make a brain that, that compute or how, how computation in the brain is, is sort of instantiated into circuit.  

Paul    01:01:32    Tony, how has your paper aged? I had you on, I was like two years ago or something. I don’t remember. It was forever ago. Uh, but, um, yeah, what’d you do, would you have written anything differently in the paper at this point, or do you still stand by the original message and  

Tony    01:01:48    It’s sort of laid down the, um, certainly the path that my lab is going to be taking in this, in this domain. So, you know, that was, you know, it was an essay, it was full of, uh, ideas and observations, but not actual work, but for me, like the research program that, uh, is suggested by it is to figure out how to actually compress, uh, an artificial neural network wiring diagram into a quote unquote, a genome. And that when you ask, what am I thinking about that on a day-to-day basis, what I’m thinking about and you know, that that’s the nitty gritty of it and it’s been, um, it’s been a lot of fun. It is a lot of fun, um, to see how we can do that. So, um, but I think, I think there’s, you know, if I were to ask, I’ve been talking to actually Blake about this recently is to think a little bit harder about the role of evolution in all this. I think, um, you know, how to actually fit that in is like, I, I don’t have a clear idea yet, but in terms of a path forward, um, that that’s something that, that I’ve certainly been thinking a lot about.  

Paul    01:03:04    Do we understand evolution itself well, enough to think about those things?  

Blake    01:03:08    I feel like we understand the principles of evolution pretty well. You know, it’s, for me, the biological equivalent to Newtonian mechanics, it is the core insight that allowed us to build all the rest of the conceptual scaffolding and general approach to doing the science. And so, you know, I’m sure there are still tons of things that people are learning about evolutionary processes every day, but the fundamental mechanism is very clear and we can simulate it. We can show that you get all sorts of interesting things that, you know, are explain a variety of facets of biological life. And, um, so though there’s more to discover. I think if, if we don’t admit to knowing to understanding something about evolution, then I’m not sure what we do understand  

Tony    01:04:03    I’m with you there.  

Paul    01:04:05    All right. As, as organizers of the nicest, uh, conference this year, um, I will just put your feet to the fire. Where do you guys think complete speculation? Of course. Where do you think we are on the fad curve of what, uh, what is sometimes called neuro AI that has, well, go ahead,  

Tony    01:04:24    Hardly a fad. I would say that it’s just the opposite. It’s uh, after a neuro AI winter, we are experiencing, um, the, the neuro AI spring. So we’re returning to, uh, you know, we’re returning to our roots. So it’s, I think that it deserves, I think neuroscience deserves to be a part of AI and vice versa, and we’re just hopefully gonna kind of catalyze that, that return. Yeah.  

Blake    01:04:59    I, I agree with that. And I think the only caveat I’d add, and this is why sometimes you can talk about fads and I say this with all due respect to anyone in California listening or here on the call. Um, sometimes the, the fad machine that people are picking up on is not what’s going on in academia or even industrial research, but the fad machine, that is what comes out of Silicon valley, venture capital culture and stuff like that. And there, I think we probably have passed an inflection point. If you just look at the number of searches online for deep learning, it’s down from a few years ago. If you look at the extent to which people are throwing money at anyone who says they have a company that does deep learning that’s down from a few years ago. So there’s some business cycle fad that maybe is slightly on the wane, but I think the longterm business trend, and certainly as Tony articulated, the long-term scientific endeavor is not a fad has never been a fad and will continue to pick up pace as we start to figure more and more out.  

Doris    01:06:06    I completely agree. I mean, I feel like this, this neuro AI is as fundamental as, you know, physics or chemistry. It’s, you know, the study of intelligence perception, all of these. Um, I would think that some of us, you know, that’s what we care about. It’s, um, it’s so fundamental. Like it’s, you know, there there’s certainly like fads, I mean, in, in how you analyze data using neural networks and so on, that’s all true. But, um, yeah, the fundamental quest to understand intelligence that can be called a fed.  

Paul    01:06:36    All right. One last question. I know you guys have to go, uh, for each of you do the problems that you are working on. Do they feel bigger or do they feel smaller than when you began working on them or as, as you continue to work on them?  

Doris    01:06:51    For me, they definitely feel that there’s so much bigger. You know, when I first started recording from face cells, the question was like, what, what drives this space, Sally, versus the eyes, the shape of the face or side, we’ve pretty much figured that out, you know, the cells are driven by shape and appearance features. Um, so now we’re asking questions, like how does brain generate a conscious perceptive of face or can the brain, how do you imagine a face? Um, how do you, how do you learn a face in an unsupervised way it’s, um, holding a new realm of questions to,  

Tony    01:07:23    Yeah. Going to when I started graduate school, I had this fear that, um, I wouldn’t, I wouldn’t train up fast enough. Um, and that, by the time I sort of understood enough to do useful work, all the problems would be solved. So that turned out not to be quite, uh, so in that sense, the problem seem, uh, constantly to get bigger. I thought the problem was pretty small when I started. Uh, and, and, you know, I, I thought that it was kind of like training up as a physicist in the late in the mid twenties, right? Like, because there was this sudden moment where, you know, everything got figured out in quantum mechanics. And if you got your degree in, you know, 1929, you’d missed the boat. I figured that that’s how it was going to be, uh, turns out we haven’t, we haven’t quite hit that inflection point. So the problems remain as bigger, bigger than when I started graduates.  

Paul    01:08:26    Cool. Blake, I’d love to hear a dissenting voice here that they’re all the problems seem tiny. Now  

Blake    01:08:33    I’m afraid I can’t give you that kind of descent. I mean, what I’ll say though is, um, I think that as the field matures, uh, what’s interesting is that you get to the point where though the problems can seem bigger and are bigger. You feel, at least me, I feel a little bit more like I have some of the conceptual tools to tackle them. And so though it seems like there’s more work to do, and the problems are bigger. I don’t feel the same sense of like, well, how the hell are we going to do this at all that I felt maybe like, you know, 15 years ago when I was starting my, uh, graduate school, like that’s, that’s a radically different scenario that way, um, to feel like we actually have some of the conceptual and experimental tools necessary to tackle these problems that do indeed seem bigger to me now.  

Paul    01:09:33    Well, considering that a 99.999% of the organisms couldn’t be with us today because of that bastard Lee evolutionary optimization algorithm. I really appreciate, uh, this little sliver of humanity being with me. Thanks guys for joining me.  

Tony    01:09:48    Thank you for much.  

Paul    01:09:55    Brain inspired is a production of me and you had don’t do advertisements. You can support the show through Patrion for a trifling amount and get access to the full versions of all the episodes. Plus bonus episodes that focus more on the cultural side, but still have science go to brand inspired.co and find their red Patrion button there to get in touch with me, emailPaul@brandinspired.co. The music you hear is by the new year. Find them@thenewyear.net. Thank you for your support. See you next time.  

0:00 – Intro
4:16 – Tony Zador
5:38 – Doris Tsao
10:44 – Blake Richards
15:46 – Deductive, inductive, abductive inference
16:32 – NAISys
33:09 – Evolution, development, learning
38:23 – Learning: plasticity vs. dynamical structures
54:13 – Different kinds of understanding
1:03:05 – Do we understand evolution well enough?
1:04:03 – Neuro-AI fad?
1:06:26 – Are your problems bigger or smaller now?