Brain Inspired
BI 129 Patryk Laurent: Learning from the Real World

Support the show to get full episodes and join the Discord community.

Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and the perspective that background has given him on what's needed to move forward in AI, including which principles of brain processing matter more, and which matter less. We also discuss his own work using some of those principles to help deep learning models generalize better, to better capture how humans behave in and perceive the world.

Transcript

Patryk    00:00:03    I think GPT-3, or these large language models... they are expensive toys. I think they're expensive toys: fun, but maybe not of very much use, in my opinion. There's a very rich amount of encoding and representation processing in a spiking system that's just not used right now. To what extent can AI research or machine learning research inform us about brains? I think it can really help us understand the challenges and the problems brains face, and help us reflect back on ourselves: well, what can humans or animals actually do, and why is it so hard for these machines to do it? That is super valuable, in my opinion.

Speaker 3    00:00:54    This is brain inspired.  

Paul    00:01:08    Hey everyone, it's Paul. My guest today is a friend of mine, actually: Patryk Laurent. I met Patryk in graduate school and always admired his broad perspective and his depth of knowledge on a variety of topics. As you'll hear, that depth of knowledge came from a diverse background, and his neuroscience graduate school time was just one stop along what has continued to be a diverse career path. So I thought it would be interesting to have him on and probe his broad perspective on neuro-AI. As you'll hear, he's also worked on a variety of projects in industry. So this is more of a casual conversation, but we do talk about some of the work that he published with his team, using deep learning methods with some neuroscience inspiration to help models better generalize in environments that are constantly changing. This conversation didn't have a specific focus like most episodes do, and I appreciate Patryk being willing to go down that road with me. We do jump around plenty, but I hope you take away some helpful insights. I link to Patryk's information and the paper we discuss in the show notes at braininspired.co/podcast/129. Thanks for listening. Enjoy. So I've known you, Patryk, for... I don't know if "many" is the right word, but since graduate school. What was that, 50, 60 years ago?

Paul    00:02:33    I figure there are two types of graduate students. There are those that come in super green, like me, where this is their first time learning some of the material, without a necessarily advantageous background. And then I always admired you, because you're the other type of graduate student: someone who seems to already have their act together, for whom this seems like step number 12 in their trajectory. Because what I didn't appreciate is how much software development and machine learning background you had coming into graduate school. And of course you were learning new things. How would you summarize your graduate school and academic experience, from graduate school through postdoc? I know that's a large question, but I know you also faced some challenges along the way as well.

Patryk    00:03:31    Sure, absolutely. Yes, I think I did come in the door with certain ideas, because I had a fairly good undergraduate research experience. That started in my first year, actually.

Paul    00:03:47    Well, is that when you got into neural networks, or when you started really getting into it? It was actually back in high school, right?

Patryk    00:03:55    High school was when I first started reading about them, yeah, you're right: for some science fair project type stuff and some hobby projects. But it was only in undergraduate that I joined a lab that was doing spiking neural network research to understand hippocampus function. And that came with taking a class on machine learning. Essentially, we didn't really use that term as much then, but it was the theory of neural networks and applications of neural networks to certain kinds of problems. So that was the start of my exposure. And then, right, graduate school. Yes, I did also learn a lot there. For example, I had not been exposed to reinforcement learning until graduate school, and so that was cool. So I learned more methods and techniques, and learned about other brain regions, and about thinking more on the whole-brain scale than on a brain-region scale. I think that was a big part of my graduate school experience, right?

Paul    00:05:07    Your development as a scientist?

Patryk    00:05:09    A lot of development as a scientist, for sure. Learning to enjoy the grant writing process... which, fortunately, I never ended up submitting many grants and getting rejected, so I didn't get all the rejections. So in my mind, writing grants is still a positive experience. I don't know about...

Paul    00:05:27    Oh, good. This one’s going to get a lot of funding. Yeah.  

Patryk    00:05:32    Yeah. So I don't know. I think there are definitely challenges with getting your research program going and doing all the networking and socializing of your ideas. There's a whole other part of the science process that I did not recognize at the time, which is also important for your career development. Of course, I had awesome chats with people, you and others, but there's always more you can do there, too, other than just designing hypotheses and evaluating them. There's a lot more to science life than just that. So learning that was really good as well.

Paul    00:06:13    And then after your postdoc, you went on to work for multiple companies, and you were able to use that. So first of all, you were always into computers, right? What was the early machine that you had, the one you were fond of? I don't even... the Commodore 64? Is that a word?

Patryk    00:06:37    That is a food. That is a thing, yeah. The Commodore 64 was the entrée, that's right. And then old IBM machines. So from a very young age I started working with computers, but I think that drove me to learn what else was out there. And I think that's how I got into neural networks and machine learning. It's not just about programming; there's other stuff out there.

Paul    00:06:59    Yes. Yeah. I feel like that was cheating. I feel like people like you were into computers and programming... like, I'm trying to get my kids into it a little. There are some games that teach you, sort of, the tenets of programming. They're not very fun, though, and I don't even enjoy playing them, you know? So when they ask me to play them, sometimes I pass. What was your first job ever? Do you remember? Of course you remember. Like, making money?

Patryk    00:07:29    I think my first job ever was in high school. Well, actually, unofficially, I did have an unofficial job, which was at my dad's company, my dad's office, where he worked. They had a large number of computers, and so I would often go in and either update them, or upgrade their operating system, or if they needed a little repair, I would do things like that. So that was my unofficial job. But my first official job was actually assembling working computers out of broken computers. You know: here's a room full of broken computers; can you take all the working parts and assemble the best functioning computers you can out of those? That was my first, yeah.

Paul    00:08:17    To highlight the differences in our backgrounds: my first job, unofficially, was a little lawn mowing company in our neighborhood growing up. But then my first official job, of course, was bagging groceries. A little different. I was assembling the groceries into the bags.

Patryk    00:08:32    Um, optimizing their fit into the bags.  

Paul    00:08:35    I've noticed that that art is lost on grocery baggers these days. But my point is that you have had this diverse background, you know: software development, getting interested in neural networks, going into neuroscience, and since then working for various diverse companies on all sorts of different projects. Is that diversity something someone should strive for? How do you think it benefited you?

Patryk    00:09:05    I think having the diversity of experience... So I have worked in publishing companies, healthcare companies, robotics companies, home and consumer electronics companies. And then most recently, my work has been split across real estate finance, another publishing company, and education; I guess those are the main three sectors. So it's quite a diverse set of sectors. I think what I've taken away from that diversity is how important it is to understand, or formulate, the problem that needs to be solved. It's really, really important to be able to clearly express what the problem is.

Paul    00:09:57    That's more of an industry skill, isn't it? That focus on the problem I'm trying to solve?

Patryk    00:10:02    It could be, it could be. But I think it's also relevant for academic and research settings, because it's more than formulating a hypothesis; it's understanding the system or framework in which that hypothesis is being asked. You can always make a hypothesis, but how is that going to inform you about the problem of, say, understanding X? And I guess we'll talk about the problem of understanding what the brain is doing, for example, or what intelligence is... or consciousness, which is one I probably don't want to get into. But formulating what it is you're talking about, or what it is you really want to learn, I think is key.

Paul    00:10:50    How do you think it’s hindered you at all? Because some might look at that track record that trajectory and think, well, it’s too scattered. It’s too all over the place.  

Patryk    00:10:58    I totally expect that, and it's something I've encountered throughout my career. Even as an undergraduate, I had that cross-disciplinary major, cognitive science, which had just been introduced at my university. And my academic advisor would just sort of stare at me across the desk, like, why are you taking all these different random classes? So if you're going to be interdisciplinary, you have to be comfortable with jumping between fields. You'll benefit from it, because you'll get a broader outlook; you'll transfer knowledge across these fields and see solutions that others who've been in one field might not see. That's definitely happened for me. At the same time, I'm sure there are people who talk to me and they're like, oh, you actually write code? And I'm like, I write code every day. But they can't fathom it, because their view of me, based on what they've seen, is quite different. They don't see me in an IDE, for example. Right. Yeah.

Paul    00:12:00    Aside from others' opinions of you, do you think it has hindered you in other ways? That maybe you have not been able to go as deep on some things as you'd like, or to focus for longer periods on very specific projects? Sorry, I'm asking these things because a large part of the Brain Inspired audience is actually either thinking about going to grad school, or retired folks, or people who have been working their whole lives and have been peripherally interested in this and are getting into it more, thinking about how they could get into it more, et cetera.

Patryk    00:12:35    Yes. You know, surely being able to focus deeply and exclusively on a particular subject has tremendous value. Actually, that was probably the greatest thing about the PhD program: you had, more or less, that ability to focus. So I really do appreciate that, and you can get lots of deep insights by doing that. But I think you can also get insights from that broader outlook, that cross-pollination, cross-field approach. You can get insights from both; that's what I'm saying. And so, what is it... T-shaped people? Have you heard this term? They go very deep in one thing, they're an expert in one thing, and then they have sort of a broader experience on top. Then there are people who are just extremely focused experts, right? And I think what has happened to me, just circumstance by circumstance, is that I've become a comb-shaped person. I've got several deep areas.

Paul    00:13:33    That's the thing to aspire to, right? I feel like that is a goal.

Patryk    00:13:37    Perhaps, perhaps. I mean, depending on your personality, I-shaped could be very, very good for a person: to have that exclusive focus, and then you become the person that I come to and say, hey, I know you know about this, can you help me? So having that mixture is one of the greatest things. That's the great thing about working at a company with a good crew of talent: they will have both. You'll have a lot of I-shaped people who'd love nothing better than to focus on something, and as long as you know where they are and you can find them, you can leverage them, and they can help you. And then, if you're going to be like me, you're going to have to understand the limits of your own capabilities, knowledge, et cetera. But I do appreciate sometimes just digging in and focusing on something intensely.

Paul    00:14:26    Is there room for, uh, scribble-shaped people? Because that's what I see when I look in the mirror: just a big scribble, you know.

Patryk    00:14:38    Well, yes, I think so. I'll tell you what I think that role is. There's a researcher named Alan Kay, and he calls himself, in some cases, the agitator, like "chief agitator officer," things like this: someone who's going to inspire people to do what they should know they should be doing, right? To do better. And I've met several of these throughout my career. Agitators are great. They connect distant dots together for you, help you see the bigger picture. I think that's a supremely useful role. So that's my answer for the scribble shape.

Paul    00:15:24    That's a very optimistic answer, and one that I think I fall short of, but that's okay. So, like I said, we've been friends. We weren't tied at the hip in graduate school or after, but we kind of regularly communicate. One of the nice things, I think, about academia at least, and I don't know if you could speak to this in industry as well, is that you can know someone pretty well, and have known them throughout the years, and yet there's an endless supply of things to learn from that person. So I can always go back to the well with you and find out something new, which is a testament to human intelligence. What does intelligence mean to you?

Patryk    00:16:16    One thing that has struck me is that with intelligence, and artificial intelligence more specifically, a lot of people don't like to define it. So I'm curious whether people have refused to answer this on your podcast, or whether you've asked others.

Paul    00:16:35    I mean, it's usually something about being able to adapt in changing environments, and then there's often a generalization component tagged on.

Patryk    00:16:47    That's exactly what I would say: very good generalization is what I would consider intelligence. But is that just a synonym, or is that a definition? Right? So generalization... what does that mean?

Paul    00:16:56    Right, that's what I was going to ask. What does that mean? Yeah.

Patryk    00:17:00    Yep. So I think it means flexibility. It's things that look like creativity or novelty. You might be solving the same problem in an intelligent way, or you may be solving a new problem. It could be an old problem or a new problem, but you might take a novel approach to it. And how do people measure intelligence? They do things like test your ability to do analogies. I think an analogy is a form of generalization: you're applying some features or attributes to a new condition. Abstraction is another way people measure intelligence. So I think analogy, abstraction, prediction, inference... intelligence is the ability that underlies and enables us to do all of those things. We can indirectly measure intelligence by someone's ability to do them. That's not to say that someone who can't make inferences or can't do analogies isn't intelligent; I don't want to say that. You may have never heard of an analogy, but you may have intelligence, which is the ability to do one if you understood what two colons and a dot here would mean. So I think that's it: intelligence is that ability to generalize, and that leads to a whole bunch of capabilities.

Paul    00:18:23    Can you articulate how your view of intelligence has changed throughout your career and life? Because, you know, you've also worked on robotics, and my own outlook is sort of constantly shifting; I don't think I could articulate how I thought about it 10 years ago. I'm not sure if you can.

Patryk    00:18:47    I'm thinking back to early graduate school, to reinforcement learning work, where I saw agents I was working with training themselves, right, just to get reward. I saw behaviors that looked really intelligent sometimes. It looked like they had forethought; there were things that appeared to be intelligent. And would I say they actually are intelligent? In that context, with respect to their environment, yes, they seemed intelligent. Okay, there are caveats, right? So maybe intelligence is relative to your environment. But those were fairly simple environments. I think what I appreciate now is the sheer amount of computational power and architecture that would be required to make something intelligent with respect to the real world. To make something intelligent with respect to the real world will require a lot of compute power, and I did not really appreciate that. Back then, I thought you could just slap reinforcement learning and a camera onto a robot and it would just work. And I'm now realizing that there's going to be a lot more to it than that.

Paul    00:20:07    Well, you've worked in robotics. So is that what gave you the appreciation that it's beyond slapping reinforcement learning and a camera on a robot?

Patryk    00:20:18    Yes, I would say quite directly. Basically, spending four years after my postdoc at a robotics company gave me a lot of that appreciation and understanding of how hard a problem it is to really solve well.

Paul    00:20:35    But I know that what you've done is take your neuroscience knowledge from your years in academia, make the case for it, and incorporate that knowledge into the robotic systems you guys were building; and that you think there are principles from the brain that are important to incorporate. I know, for instance, you've worked on spiking neural networks throughout the years. And yet, in a conversation you and I have had previously, you've said that we don't need to use everything in the brain, and that there are some core principles that you think are needed to implement. How many core principles are there, and how do we actually discover those principles?

Patryk    00:21:28    I think building brains, like building robotic brains, basically devices that allow robots to navigate the world and successfully interact with it, you know, perceive and act... building brains is a great way to discover those principles, sort of in the Feynman sense of "what I cannot build, I cannot understand." Modeling is ultimately what we're talking about here. Once you try to build those things, even the simplest robot that needs to be autonomous, then you start to think back to what it was about brains that seemed interesting, that seemed like it would give them the capabilities to do this. And I could hear echoes of our professors at Pitt, for example, in our program. One of them said that cortex is always predicting what's going to happen next, right?

Patryk    00:22:32    That was one thing that just echoed in my mind, and you realize that, okay, that sense of constantly predicting what happens next could be very valuable as a sort of computational driving force for these things to learn good representations. So I think you have to be deeply knowledgeable about the neuroscience of what's going on, and having done that coursework, having been exposed to what brains seem to be doing, is helpful. But you also have to face the problems and challenges and realize: how are we going to design a system that's going to understand the world around it? We can't use labels; this isn't a supervised learning problem. And then you recognize that, well, actually, using the future as the label for the present is a great way to drive a system. If a system is driven to do that and architected appropriately, it could come up with certain things. So I think you have to have both: the neuroscience knowledge and the appreciation for what brains are doing.
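
To make "using the future as the label for the present" concrete, here is a minimal sketch of that kind of self-supervised loop. The network, sizes, and random stand-in data are illustrative assumptions, not details of any system discussed here:

```python
import torch
import torch.nn as nn

# A tiny network trained to predict frame t+1 from frame t.
# No human labels are needed: the next input IS the label.
model = nn.Sequential(
    nn.Linear(64, 16),   # compress the current frame into a small code
    nn.ReLU(),
    nn.Linear(16, 64),   # expand the code into a prediction of the next frame
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.randn(100, 64)  # stand-in for a stream of sensory frames
for t in range(len(frames) - 1):
    prediction = model(frames[t])               # predict what comes next
    loss = loss_fn(prediction, frames[t + 1])   # the future is the label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```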

Paul    00:23:37    Should we talk about that? You just mentioned some of the things that went into a paper that you and others published, oh, six years ago now, I suppose, that used an unsupervised learning algorithm. Should we talk about that work, just to kind of lay it on the table?

Patryk    00:24:00    We could, we could do that. We could...

Paul    00:24:02    Because it's important to be able to. So, what you were just talking about, how you need to predict in an unsupervised way: you guys, and you can elaborate more on this, built a system that would essentially predict the next time step. Then you made multiple hierarchical layers, and then you could track a green basketball and look at stop signs in various lighting conditions, with various visual challenges. But it was a visual system. Is that an accurate description? And what was I missing there?

Patryk    00:24:33    That's highly accurate. There were situations where we'd see the visual systems of our robots fail on a very regular basis. And we were using state-of-the-art convolutional networks, we were using other computer vision algorithms, state-of-the-art stuff, but despite tremendous amounts of training or very clever parameter settings, from one day to the next the system might fail to track an object. That type of failure was obviously disheartening for a lot of people on the team, but coming up with an understanding of why that was the case was key. Our brains do a tremendous amount of things for us. I don't know if you're familiar with Moravec's paradox? There's this paradox...

Patryk    00:25:36    But right. The idea is that it’s very easy to make computers do very sophisticated things like chess. Right. But we sort of ignore the things like what a one-year-old or two-year-old does. Right. Just being able to see an object on a table. Right. So things like this, and those are, are, those are also amazing things, but we just don’t think of them as amazing. We just tend to think, oh, that was a green basket. That was a green basketball. Um, we didn’t, we didn’t realize that we’re constantly compensating for ambient lighting and colors and shadows and, you know, blurry effects. And so, so there’s always like, there’s a lot of adaptation going on, even at that perceptual level. Right. A lot of intelligence, you might say generalization going on, even there. And so we try to capture that using three principles from that we thought were bringing inspired principles, right. This predicting the, predicting the future, doing this very compressed, uh, representation at every part of the system and having massive feedback throughout the system. Right. Um, massive to even the input layer. So,  

Paul    00:26:45    But so the, the architecture that you guys use is, is similar to multiple other architectures. First of all, you mentioned convolutional neural networks, but at least on the paper, if I remember correctly, you’re using kind of simple three layer multi-layer perceptrons right. And then, okay, so there’s that component. So you’re using this quote unquote simple MLP, but then you are TA and the job of the final layer, the third layer is to predict the next time step incoming. Right. But then sort of like an element, a recurrent neural network, you take output from the middle layer, the hidden layer, which is a smaller layer. So it’s compressed actually, which is like an autoencoder, but you take that compressed representation and you feed it back to the earlier layers. So then I’m describing one little unit here, and I know it’s more complicated than that, but then you had a bunch of units, um, that comprised kind of a layer and you put lateral connections between those units. And then there were hierarchies, hierarchical levels of these little units, and they’re all kind of connected together. Um, so there, so it’s interesting that you, you, you use a lot of different similar, uh, architectures like a vanilla recurrent neural network, uh, of vanilla multilayer perceptron and then what’s now known as PR from principles of what is an autoencoder. Yes,  

Patryk    00:28:10    I would actually call them, the term we used was, "future encoders," because that was the key aspect: they had to predict the future. They couldn't just re-encode what was happening.

Paul    00:28:22    Right. So the, so an auto encoders job is to regenerate the original data. Whereas in this case, the job of the MLP was to predict the next time step in an unsupervised fashion.  

Patryk    00:28:32    Right. Right. Yeah. So it’s almost like it’s almost self supervised. Um, and the, the interesting thing too, is it might almost be called a heterarchy. I mean, yes. So there is a certain order, so it is somewhat hierarchical, but because all of them can communicate with each other. Right. All of them are trying to use, not just their own past to predict what they, what they think would be coming next. They also use their neighboring, uh, their neighboring processing units to try to help them predict. So they essentially become sensitive, not just to their own input, but distant input, potentially through multiple connections, um, to, to better process. And the why this is important, um, is because if you have a scene and you’re tracking them and you’re looking at an object, it can be very valuable to integrate information across the whole scene to interpret, say the color of that object. Right? So for example, if there’s a tree casting, a shadow onto the object, you know, that tree may not even be in your field of view, but you might see the shadow kind of coming across the ground towards that and overlapping with that object. Um, the only hope you have of, of compensating for the shift and color of that object is to be aware of the shadow over there. Right. And that needs to somehow impinge on the processing. So that was the, the impetus behind the architecture you’ve described pretty, quite accurately.  
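
Here is a rough sketch of one unit of the kind described above, under the assumptions just laid out: a compressed hidden code that is fed back to the inputs, its own and its neighbors', and an output layer trained to predict the next time step. The class name, sizes, and neighbor count are illustrative, not reconstructed from the published model:

```python
import torch
import torch.nn as nn

class FutureEncoderUnit(nn.Module):
    """A small MLP: a compressed hidden layer whose code is fed back
    (to itself and to neighboring units) on the next step, and an
    output layer trained to predict the next time step of its input."""

    def __init__(self, input_size=32, code_size=8, n_neighbors=3):
        super().__init__()
        # Inputs: current frame, own previous code, neighbors' previous codes
        total_in = input_size + code_size + n_neighbors * code_size
        self.encode = nn.Linear(total_in, code_size)     # compression
        self.predict = nn.Linear(code_size, input_size)  # next-step output

    def forward(self, x, own_prev_code, neighbor_codes):
        full = torch.cat([x, own_prev_code] + list(neighbor_codes), dim=-1)
        code = torch.tanh(self.encode(full))   # compressed representation
        return self.predict(code), code        # prediction, plus the code
                                               # to feed back next step

# One step of use:
unit = FutureEncoderUnit()
x = torch.randn(32)
own = torch.zeros(8)
neighbors = [torch.zeros(8) for _ in range(3)]
pred_next, new_code = unit(x, own, neighbors)
```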

Paul    00:29:59    So this is a little nitpicky, and I haven't studied the paper in that much depth, but the brain is modular to some degree, right? It's not just mass action. But all of your units were connected kind of equally; everyone got the same vote. So you have this almost homogeneous collection of units. And of course the brain is highly, highly recurrent, at every place, but some recurrence is probably stronger than others; some counts more. I suppose this is what happens when you train the network, that some connections become more important than others. But it seems like so many different inputs could actually overwhelm the system. Did you guys face any problems with too much recurrent information?

Patryk    00:30:57    So I would say that this is a very, very fair point. I would say that this architecture that we presented is version one version, 0.1, maybe you call it,  

Paul    00:31:11    Come on. Why isn’t it? Why isn’t it like a whole brain man?  

Patryk    00:31:14    But I, I think to, to your point is very fair. Um, there was a little bit of exploration. This, I should say, this was a one-year project. It was a one-year project that was funded by DARPA act after which we had to end it. Um, and so, yeah, so we, we actually had to get outside funding in order to make this project happen, um, because it was sort of outside of it was getting out of scope for our own company. So, so questions about how much recurrence and, and if there was a, there should have been a gradient, like a neurodevelopmentally set up gradient of, of recurrence, um, prior. And would that have helped, um, you know, quite potentially, I think a lot of exploration needs to be done with this type of architecture. Um, I would say, unfortunately, it’s very hard to run this type of model, implement it in something like TensorFlow or PI torch, one of the more common, uh, because of it, the style of its recurrent connections, it’s quite challenging to implement. Um,  

Paul    00:32:15    Not just, not just because you’d have to hand a engineer, the different architectures and they just, they aren’t built in as features in the, in Python or TensorFlow.  

Patryk    00:32:24    It’s not just that. Yeah. The libraries don’t contain these. Um, although they probably, I think it would be a matter of connecting these units all together would be a bit of a challenge. Um, um, one, one other thing I would say is that despite you know, us not having tremendous amount of time to fine tune or optimize the architecture, the T to get to your point of the training would be setting things up. One thing we did notice was that we had this large sort of data set of video continuous video, and on which it was trained and we were training it and it just kept improving, always improving. Um, so a lot of, a lot of, uh, deep networks, they they’re they’re, they kind of saturate, you know, they need more data. They’re very data hungry in the sense that they just need more data in order to get him any improvements. This architecture didn’t face that, um, the longer we trained it, the better it got, it never was, you know, it never hit an asymptote it or never asked them to do it. Right. It just kept getting better. Um, which was fascinating to me. We would check it overnight, we’d check it the next day. It was trained it for weeks and the  

Paul    00:33:31    Wouldn’t that lead to perfection though. And it, but it was still making errors,  

Patryk    00:33:35    Right. It still wasn’t perfect at predicting its own future, but it just kept getting better with the same dataset, which was really, uh, I, I’m not even sure what the implications of that are. Right,  

Paul    00:33:48    Right. Just like you and me, we just keep getting better and better. That’s what the implication is. I hadn’t thought of this, but, um, was that, so I guess you started working on that probably in 2014 or 2015 or something. And these days, it’s all about unsupervised learning. You know, when you, uh, you hear like Yan, Lacoon talk about the future is, is self supervised slash on supervised learning where you guys ahead of the curve. They’re not that unsupervised learning hasn’t always been around, but, uh, it just hasn’t been as popular because, uh, supervised learning was such a success for those, uh, few years after 2012.  

Patryk    00:34:27    Yeah. I mean, I it’s, I think we were, yeah, perhaps we were ahead of the curve. I think it was definitely a different path. We were definitely on a different path, um, from, from what most people were on to, because of the problems we were facing or the problems we chose, that the fact that we chose to recognize the problem as one of not, you need more data, right. That the majority of the field just said, well, you just need more training data. They’re still saying that to some extent today. Whereas we realized it’s not about more data, it’s about having a architecture, right. The algorithm that is suited for the problem you’re trying to solve. Right. It has to be suitable.  

Paul    00:35:09    You also this kind of switching topics here, but thinking about modern networks and the continued rise of, um, expansion of these deep learning models, et cetera. Um, I know that you, you have interest in language at least, you know, you did back in, uh, in graduate school. Um, w what are your thoughts on these recent transformers, these language models? It’s not something that we talk about much on the podcast. Um, I have a few kind of episodes kind of geared up, um, to talk about them more, but, uh, how do you view? Well, I mean, there’s, I know there’s a bunch of different language models, but how do you view, like the transformer kinds of models?  

Patryk    00:35:50    The transformers to me are very interesting. I view them as a formalism formalization, perhaps as the word of what recurrent networks do. Um, but in a way that makes them suitable for GPU acceleration and more precision, right. Less, less of the vanishing gradient due to time, right. They they’re able to go back and write, instead of hoping that the recurrent system learns to preserve the meaningful information, to make the decision at the end, right. Instead of just hoping that that sort of churning process preserves what you need, the transformer architectures makes it more explicit, says, if you, if this information is beneficial for your output at this time, you can actually go and get it. You can retrieve it from that moment in your history. Right? And so, to me, that’s a really clever design and in my hands using them on a few problems, transformers have been very, very successful. They’ve worked very well. Um, of course I’m using much smaller transformers than these big language models and, and, you know, smaller data sets. But, um, but they’re, they’re a useful tool.  

Paul    00:37:01    So you use the term of formalism, uh, to describe what was happening with transformers. And I, that immediately made me think of the word syntax. Um, and, but our semantics still missing. And I I’m really naive about, uh, language models these days, but, you know, there’s always the problem of meaning and semantics, right? So is it all syntax or, uh, or are there semantics, is there meaning in there?  

Patryk    00:37:28    Right. So I think GPT three or, or these large language models, um, they are expensive toys. I think they’re expensive toys, um, fun, but maybe not of very much use in my opinion use for what use for, for any. So I think people have been interested in, in using them in various domains, helping them analyze, summarize text, for example, things like this, um, or using them in the context of a, a customer service system or an education system for kids, or, you know, things like this. I, I think that they don’t know what they’re talking about. Right. And that’s why you see, you can always cherry pick great looking examples, but you can also see very bad failures of, of, of these things. And, um, it’s not that hard to get bad failures. Some people make it a hobby to just  

Paul    00:38:27    Show how they fail aggravators.  

Patryk    00:38:29    Yep. Yep. So I think, yes, what is, what is missing is the intelligence, right? The, the good generalization and probably the way to do it is, is so, so I used to believe that you could learn language by listening to the radio. You could learn the language by the sense of the radio, right? That was, you know, the old idea that just by looking at strings of language, taking it, taking it in, right. You should be able to figure out what it is. And then eventually I realized, you know, you might not, um, you might not be able to do it until you actually, so suppose the language is a scream about a coffee shop, right. About activity in a coffee shop. There’s a barista people ordering things, you know, you’re getting served things. Maybe you even have the audio of what’s happening. Some stirring sound, some poring sounds. Right. Um, I think I actually, to the point, I think stirring sounds and pouring sounds or seeing what’s going on is what helps you really be sure of the meaning, right? Until you, until you get that real-world input that accompanies the language string a stream, you might always be a bit unsure about what, you know, what they’re really talking about.  

Paul    00:39:44    Well, I thought you were gonna, I thought you were gonna go action that you have to speak. You have to generate.  

Patryk    00:39:50    I, I’m not, I’m not necessarily ready to jump there yet to say that you would have to, but I would think you could sit in the coffee shop. If you sign on the coffee shop, that’s much better than just hearing the language stream about what’s going on with the coffee shop.  

Paul    00:40:03    I forgot, uh, Sarah McLaughlin playing in the background as well. If you’re in a coffee shop, that’s probably, that’s a little outdated,  

Patryk    00:40:10    But I’ve seen her. I’ve seen her perform twice. Oh,  

Paul    00:40:14    I didn’t know. You’re a fan of Sarah McLaughlin. Are you a fan?  

Patryk    00:40:19    I actually am a fan. Yeah.  

Paul    00:40:23    We can edit that  

Patryk    00:40:24    Through my wife. I learned about, I learned about her through my wife and  

Paul    00:40:29    Oh, I didn’t know she was still going. Um, anyway, there are people like Chris Manning and others who are analyzing, you know, the properties of these language models and finding, um, for lack of a better term human-like structure. So the, the question is, are they teaching us anything about our own intelligence, right. Can, and this goes back to the, your, your model that we talked about also did, you know, using that AI, uh, approach, you can answer it for both. Right? So, uh, do transformers and modern language models have anything to teach us about our own intelligence?  

Patryk    00:41:09    Great question. I think you could certainly, if you have a model that does something and you look inside and you find certain patterns of activation, perhaps I could say, it’s, it’s likely you will find those patterns in, in the brain as well. Right. Um,  

Paul    00:41:28    Or, or there’s multiple ways to skin a cat multiple realizability right. It might.  

Patryk    00:41:34    Right. So it’s not to say it is. Yeah. It’s not to say it is the way that this is solved. Right. But it’s a way it may be solved. And I think the brain does things with diverse, diverse manner of ways. Right. There’s no, I don’t think there are single mechanisms to achieve something or there are probably multiple mechanisms to achieve something. Right. And so perhaps, you know, you might look at what a transformer’s doing and you might say, oh, you know, I found something in Warner, a area that’s similar or some higher, you know, some higher language level area that’s, that’s similar. So I think, you know, is that interesting? You know, perhaps it is, um, I guess what’s perhaps more interesting is understanding where those models fail and where brains succeed and trying to get insight there. Right. Um, so, so yeah, so maybe that’s, that’s all I can say is just, it’s just, you can always look into a model and then see if you find something in the brain that’s like it, I just don’t know if that really, how helpful is that? I don’t know how helpful that is. So inter but your question of, to what extent can AI research or machine learning research inform us about brains? I think it can really help us understand the challenges and the problems brain space. Right. It can make those very clear. Um,  

Paul    00:42:51    I was going to ask you how you, how you thought thus far, how AI has most contributed to helping us understand our own intelligence. And is that the, is that your answer that, um, the challenges  

Patryk    00:43:02    That, that would be the, yeah, that would be my answer to that, that they help us know what they help us reflect back on ourselves and wow. What can, you know, what can humans or animals actually do? And why is it so hard for these machines to do it right? Like that is super valuable in life. In my opinion,  

Paul    00:43:21    Let’s reverse that I’ve had this recurring thought lately. And it’s, I know it’s not an original thought, but, uh, when we think of building AGI or human level, uh, intelligence, we often, while I assume that we often mean a system that, uh, doesn’t make any mistakes and yet humans, we have so many Follies we’re so imperfect. Um, can, how do you think about that thing? Well, do you agree with that? First of all, because the picture of a perfection, right. An AI that’s perfect. Well, now it’s human level we’ll know because humans are way imperfect. Right. So can we use our own Follies to help inform, uh, AI as well?  

Patryk    00:44:05    Yes, probably. I, I think that’s really insightful way to think about it. Um, because when one talks about a GI,  

Paul    00:44:18    Not that we need to, I don’t know what that means either, but yeah. Let’s say  

Patryk    00:44:21    The level of performance, right? Like order exceeding human level performance. Right. Um, you know, is right. Is human level the goal or is it just human, like performance? That’s the goal, right. That’s one thing. Um, and I’m thinking of these, uh, these claims of, uh, radiology, radiology, right. Scanning things, right. Where they exceed human level perform  

Paul    00:44:47    And they see a tumor well, where a human can’t see it a tumor. Is that what you’re? Well, they had diagnosed better than a human. Yeah.  

Patryk    00:44:53    Right, right. Exactly. Exactly. I think that’s, you know, that’s really impressive. Of course there are some humans that will probably be able to do that. And there’s the term idiot savant. I don’t know if there’s some more politically correct term for it now. Right. But there are these days, there must be something, but there are always people who can do things they’re outliers. Right. You can do things amazingly well. And I view a lot of these machine learning systems as kind of maybe becoming like those types of, of, uh, you know, outline, outlier people where you’ve got,  

Paul    00:45:23    It’s probably, you probably just drop idiot. And just, I think we just say savant now, maybe, maybe, I don’t know.  

Patryk    00:45:30    Yeah. Yeah.  would be like a wise person, you know, someone who’s  

Paul    00:45:35    An idiot. So that would be a very specialized person.  

Patryk    00:45:38    Very special. Right. Exactly. But, but, so yeah, I know we’ll have to figure out what to do, but okay. But, right. So, so do, do, does human level performance matter, does exceeding or approaching human level performance matter? Or do we just want human like performance right. Where, you know, it may not, it may be prone to error, but it still recovers really well. Right. You know, like people make mistakes all the time, but they can recover very quickly. Um, so take, take the self-driving cars and crashing into, uh, a police car with sirens or, uh, uh, a fire engine, right. Parked fire engine with siren lights going on. Right. Humans don’t crash into those things. Right. Uh, so who cares about your level of performance? I think the goal is to get something that makes the kinds of errors perhaps, or doesn’t, or recover from the kinds of errors in a way that a human mind. Right.  

Paul    00:46:39    So the car example is a good example, right. Because the whole point of having self-driving cars is that it would not make the same sort of dumb errors that humans make and contribute to fatalities. Um, and, and the danger of driving. Right. So  

Patryk    00:46:55    Is that, is that really  

Paul    00:46:57    Isn’t that one reason what was the, what would be the other reason the traffic flow,  

Patryk    00:47:02    Right. Maybe traffic flow or, you know, you can just gain a bit of a few hours in your day. Right. But,  

Paul    00:47:08    But, but, but we would consider it a failure, right. If self-driving cars came along and we had the same fatality rate as before self-driving cars, wouldn’t that be a failure of the AI?  

Patryk    00:47:19    I don’t know. I don’t know. I, and also I think that if the errors were human, like maybe that would be, you know, if you get to explainability of AI right. And people understand why things fail. Right. Because, you know, if, if you’re crashing into, uh, into a parked police car or something like this, like I think that sends sort of shocks of terror throughout people who are looking at this technology. Right. Because it’s a failure, they can’t understand how that failure happened. Um, so in terms of just trusting the system now, I mean, I’m sure plenty of people trust the system now. Right. For self-driving. I would not, obviously the knowing what I know, I wouldn’t do it, but, but, but I don’t know. It’s, it’s totally a good question. I don’t know where, where people sit on their acceptance of, of, of that and what they’re expecting a human level performance sounds great. But is that really what we should be looking for?  

Paul    00:48:22    What would you imagine? And I can tell you kind of what I would imagine too, like you would, how would you feel, or what would you need to see to think something is, oh, that’s real AI,  

Patryk    00:48:35    Right. Real AI. Yeah. Great.  

Paul    00:48:39    Whatever the fuck that is. But, but so I’ll give you mine. And then, uh, well, there are a couple like criteria I think for now and, you know, ask me next week and it’ll, I’ll have a different answer or no answer. One of the things is, so you mentioned autonomy earlier. Uh, one of the things about autonomy is that I am autonomous for myself and I, you know, my, my care for you Patrick is limited. Right. And is probably on some level, uh, related to how much gain you can provide for me. Right. So,  

Patryk    00:49:11    Yeah,  

Paul    00:49:13    That’s a thing. So I don’t care about you, Patrick. Right. So AI has done not like real quote unquote AI has to not care about us, um, for one thing. And the other criteria that I’ve recently been thinking of is I just don’t think that we would be able to understand. So there here’s the thing, uh, it doesn’t care about us. It’s kind of disregards us unless it really needs us, but I don’t know why it would need us. And then the other thing is, I don’t think we would be able to understand why it was doing whatever it was doing, let alone understand what it was doing. Does that sound ridiculous?  

Patryk    00:49:48    It doesn’t, it doesn’t sound ridiculous. I, I think one has to consider what is the imperative for that AI system, right? W w w what is it, what is it sort of fundamental design or programming, right. Because there, there are things, you know, I guess instincts, you might say, or very low level drives, right. That are in all of us. Right. And, um, you could imagine that those drives might not exist in an AI system. If say the resources it needs are super plentiful, for example. Right. It doesn’t have to worry getting energy for itself. Let’s say. And so, um, so maybe it doesn’t have to worry about that. Maybe it doesn’t have to have wars over that energy, things like that. Right. And say, they’re solar power. Let’s go, let’s go matrix. Right.  

Paul    00:50:41    Um, San Diego, that’s a very San Diego thing.  

Patryk    00:50:43    Totally. San Diego, right? Yes. Um, so, so this, again goes to the fact that, uh, uh, artificial intelligence systems don’t necessarily, they may not necessarily end up being human, like, right. They might become very specialized. It’s just, can they generalize? That’s is, is the question,  

Paul    00:51:00    Well, can I just interject, because I think that there are kind of two visions, right? So, um, stemming from the question, well, you know, what do you want AI for? And, and one vision is that we’ll have all these, like very specialized AI robots doing specialized things, but maybe generalize. Well, uh, but the, and maybe these two things are related. The other vision is that we’re gonna build AI, uh, to be like the most intelligent thing in the universe, us. And that they’re like their intelligence therefore will resemble the most intelligent thing in the universe us. And I guess it depends on what you want the AI for. And I tend to think that, uh, the first, the former is, is where we should go, right. Where we’re going to have like a bunch of like specialized things serving our needs. Of course. And this is again, not an original thought.  

Patryk    00:51:50    Agreed. I think, right. I think the first one is probably much more realistic. Um, and I think less conceited. Right. So obviously, but right. You know, imagine, imagine agricultural robots where essentially food needs are, are satisfied. Right. I can tell you from working with robots, that they are constantly breaking down. Um, so I think an important part of this for me, would be that the robots would have to be intelligent enough to repair themselves, which I think would be a huge, um, a huge measure of intelligence and autonomy. Right. Not only can it autonomously navigate, but it can autonomously maintain itself or has a symbiotic relationship with other robots that can do the repairs or something. Right. Like, um, so having robots that can construct and repair themselves is, would be a huge, huge challenge too, because of the mechanics quite literally speaking, but that would lead to that autonomous type intelligence. So I think, I think that’s probably where we would want to go in terms of, you know, robots that can assist us in doing, doing things or, or saving time or things like that. Right. Um, hopefully without endangering the lives of people, you know, if that could be avoided,  

Paul    00:53:12    Let’s switch gears. And, uh, so I’m always asking, you know, how can the brain better inform AI? And you’ve been on both sides of the coin really, um, on the, I keep referring to it as industry, but I know that your industry experience was closely related to, um, sort of academic pursuits as well. But anyway, w what do you see as, uh, that that’s missing in AI? And it doesn’t have to be one thing, uh, where you think that neuroscientific research, uh, could inform AI.  

Patryk    00:53:45    I think that there are a lot of sort of theoretical there’s a lot of neuroscience theory, a lot of theoretical neuroscience concepts that could inform AI. There are things that we know about the brain and what the brain does that have to be leisure lever leverage. Did you have  

Paul    00:54:03    Theoretical ideas in mind? Or  

Patryk    00:54:06    We could take some examples, like, so just for example, let’s take the first one, which is spiking networks, right. And, you know, why does that matter? Right. Why should spiking the important, cause I think we talked about this a bit last time, right. That people will claim it’s energy efficient or something. And so that’s, you know, at a portable reason to adopt it, but I, I don’t know if that’s the most interesting thing about spiking, um, or the most relevant thing. And you can always make circuits energy efficient if you need to. Um, if you think about the most neural networks right now, they use vector representations, right? Vectors where, you know, each element, you know, maybe represents the average firing rate of a neuron or a set of neurons. Right. That’s the analogy that people have used for decades now. Um, and the vector has a direction and a magnitude, that’s it pretty much right.  

Patryk    00:54:59    You’re, you’re pointing somewhere and you have a magnitude of how much you’re pointing now. So it’s very, I think, impoverished, uh, compared to what a spiking code could do a spike in representation does. Um, so by spiking, I guess I would say it’s like asynchronous, it’s temporal it’s it has some ordering to it. It has some percentage population activity, right. All of these things, right? So a spiking, spiking pattern can be synchronous. It can be asynchronous, it could be in sequence, it could be randomized. Um, it could be play quickly. It could play slowly. Um, and it could ramp up and its activity level, right. It could still be the same pop population code, but it might ramp up or ramp down. And th the rate at which it ramps up, you know, percentage wise could be, so there’s a rich, there’s a very rich amount of encoding and representation of processing and a spiking system that’s just not used right now.  

Patryk    00:56:02    Right. And all of these different properties I talked about, I was just thinking in mind, like these are all with respect to a particular percept, let’s say, right. So I like to think it was this percept surprising or was it anticipated, right. Does it need to engender immediately or does it need to engender reserved, you know, res non-action right. Um, does it require orienting to it to find out more? Or is it just a prediction of a percent? We don’t actually see it. Right. Spiking codes could very naturally, uh, express all these things. And if you look at current networks, right. They don’t really have, when you look at errors in the perceptual, in a visual network, you know, these adversarial errors or things like that. I think having that spiking representations in code on certainty or, you know, yeah. Right, right. Exactly. So, um, so I, I, I think one thing that’s missing, but there are other concepts, right? Like sleep the concept of sleep.  

Paul    00:57:08    Oh, I was going to ask you about this. Cause I I’ve been asked about sleep recently too there. I mean, sorry that I interrupted you, but, um, I guess the most recent thing that I’ve seen on sleep, uh, trying to incorporate principles from sleep into networks, uh, is a spiking neural network where they, they found that, um, if they injected like sinusoidal noise, that’s supposed to mimic sleep. I don’t know how that is actually supposed to mimic sleep. You know, the, the headlines were all AI needs sleep too, but then, you know, and this thing is just on your SOTAL noise, right. Um, that, that helped the network generalize. But then the, you know, of course reap the, the idea of replay, uh, has been used multiple times. And in fact, that’s what helped, uh, the deep mind DQN algorithm learned the Atari games and now replay is just ubiquitously used, but that’s something that happens when we’re, uh, wake awake and restful, uh, as well. But, uh, we don’t understand the function of sleep well enough. Do we to start taking inspiration from that and building it in?  

Patryk    00:58:12    Right. I mean, I think this sinusoidal thing sounds really interesting. I’m going to look into it.  

Paul    00:58:17    Well, I'll send you the link. You know, taking it back to the original wake-sleep algorithm, right, remember, from pre-deep-learning: basically there was a wake phase, where the recognition weights drive the hidden activity and the generative weights get trained, and then a sleep phase, a generative pass, where the network generates its own "fantasy" data and the recognition weights get trained. That has nothing to do with sleep, right? But it was quote-unquote inspired by sleep. So I think there are clever advertisers who might use the term sleep, but I just think we need to understand a lot more about what's happening during sleep before we start claiming that these are sleep-related algorithms being used.
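
For reference, here is a toy sketch in the spirit of the wake-sleep algorithm (Hinton et al.), not the exact published updates; the layer sizes, learning rate, and flat prior over hidden units are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Helmholtz-machine-style setup: 4 visible units, 8 hidden units.
R = rng.normal(0, 0.1, (4, 8))   # recognition weights: visible -> hidden
G = rng.normal(0, 0.1, (8, 4))   # generative weights:  hidden -> visible
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Sample binary units with the given firing probabilities."""
    return (rng.random(p.shape) < p).astype(float)

def wake_phase(v):
    """Recognition weights infer hidden causes; generative weights learn."""
    global G
    h = sample(sigmoid(v @ R))                  # recognize the datum
    G += lr * np.outer(h, v - sigmoid(h @ G))   # delta rule on generation

def sleep_phase():
    """Generative weights 'dream' a sample; recognition weights learn."""
    global R
    h = sample(0.5 * np.ones(8))                # fantasy from a flat prior
    v = sample(sigmoid(h @ G))                  # generate a fantasy datum
    R += lr * np.outer(v, h - sigmoid(v @ R))   # delta rule on recognition

v_data = np.array([1.0, 0.0, 1.0, 0.0])         # a fake binary datum
for _ in range(100):
    wake_phase(v_data)   # "awake": learn to generate what we perceive
    sleep_phase()        # "asleep": learn to recognize what we generate
```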

Patryk    00:58:57    I think that's a very fair point. I think there are other concepts that are perhaps a bit better understood, like prediction over time, you know, prediction and mismatch of prediction. Those are more tangible things that could be implemented, as we did a little bit in that paper. I think it was definitely not just advertisement there; we were definitely inspired by that pretty strongly, although it's not a spiking network, as you noted. Or, and perhaps this is more behavioral than anything else, the fact that you need to take time to perceive or to act, right? Certain types of decisions or actions take a bit more time than others. These types of things aren't really used either. And of course...

Paul    00:59:53    Long-term planning for life, like planning to graduate college when you're in high school, something like that, right? You have to take 50 million steps to get there, but you don't encode all the steps. Is that what you...

Patryk    01:00:08    Perhaps, or maybe even shorter term than that. Even certain perceptual decisions take a bit of time to answer, right? Whereas with a lot of these systems, we show them an image or a frame and expect them to answer fairly immediately. You don't have to do that. You can actually run networks so they accumulate information over time and then periodically produce an output; we just don't do that very often. So reframing problems to be less about single-image perception and more about perceiving things in video, in continuous video streams, might make a big difference, since that's kind of what we do, right? It'd be interesting to see if that improves things. I've also wondered a lot about the fact that we make saccades all over the place when scanning a scene, as opposed to just taking in the whole image. Could there be some benefits of that for perception? Not to say that we need to mimic humans or animals, right? We don't, but maybe there are benefits that we haven't quite recognized.
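
One way to picture what Patryk describes, a network that integrates a video stream and only answers periodically, is something like this PyTorch sketch; all sizes and names are invented for illustration:

```python
import torch
import torch.nn as nn

class TemporalAccumulator(nn.Module):
    """Sketch: accumulate evidence over a frame stream, answer periodically."""

    def __init__(self, feat_dim=128, hidden=256, n_classes=10, every=8):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden)
        self.readout = nn.Linear(hidden, n_classes)
        self.every = every  # emit a decision every `every` frames

    def forward(self, frames):               # frames: (T, batch, feat_dim)
        h = frames.new_zeros(frames.size(1), self.rnn.hidden_size)
        decisions = []
        for t, x in enumerate(frames):
            h = self.rnn(x, h)                # integrate this frame's evidence
            if (t + 1) % self.every == 0:
                decisions.append(self.readout(h))  # periodic output
        return decisions
```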

Paul    01:01:11    I mean, there are convolutional systems, right, that are designed to look across images, mimic saccades in some way, and then piece it together over time. Okay, so let's reverse this: what can we put in the bin to safely ignore? Astrocytes? Can we safely ignore astrocytes?
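
The glimpse-based systems Paul alludes to (recurrent attention models are one example) boil down to cropping a small patch per "fixation" and letting a recurrent network, like the accumulator sketched above, integrate the sequence. A crude, purely illustrative helper:

```python
def take_glimpse(image, center, size=32):
    """Crop a small patch around a fixation point -- a crude 'saccade'.

    image: (H, W, C) array; center: (row, col) fixation.
    No padding at the borders; purely illustrative.
    """
    r, c = center
    half = size // 2
    return image[max(r - half, 0): r + half, max(c - half, 0): c + half]
```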

Patryk    01:01:32    I don’t know.  

Paul    01:01:33    I've been in communication with an astrocyte fan who has this idea that astrocytes are really the seat of, like, a subject of awareness and really higher cognitive functions. But the whole history of neuroscience has said ignore, ignore. So maybe AI should too. Are there other things, and I won't pin you down on astrocytes, I'll leave it open, that you think we could safely ignore?

Patryk    01:02:01    It is hard to answer that question, right? I think you could always take a minimalist approach that says, let's try to use only a small set of ingredients or principles and see what functionality we get out of them. The minimal set, right, to see what you can get out. And I think the minimal set is an interesting way to go forward, but I think it's hard to rule out any one of those items. So if you're going to build your model around astrocytes, what else would be relevant and included in that minimal set? Would you have to have, you know, different neurotransmitter types? Would you need to treat your glutamate separately from your dopamine? And...

Paul    01:02:53    But now you're talking beyond what current AI uses also, right? Because they're just point processes. So, I just had Randy Gallistel on the podcast recently, and he has been arguing for 50 years now that there needs to be some sort of intracellular mechanism for memory, something to read from and write to, right? And of course you have deep learning architectures where there is an external memory, but that's different. He didn't say this, but I should have asked him: he's not interested in AI at all, but does that mean that in artificial networks we have to build a little read-write memory into each unit? You know, if someone came to you with that idea, would you roll your eyes? There are all these open questions in neuroscience where you think, well, we don't have the answer to that. How would we know whether we should build it in, et cetera?

Patryk    01:03:49    Right. No, I think pragmatically speaking, if there were a way to, and there are some really interesting examples. Okay, I must say the LSTM, for example, does have an interesting little internal memory system. And there are some more interesting ones that are, say, more powerful, I guess, that have explicit little, almost Turing machines, right? Like little read-write memories, and they're linked in with that gradient descent process, so they do learn to move bits of those memories around when they need to. It's certainly a complex thing to add in, but could it help? If it's valid, it's probably worth exploring. What's the computational cost of implementing these things, and what's the benefit? So just pragmatically speaking, if you really care about getting something working and this can demonstrably show it, I think that's good. To your point about transformers earlier, right?
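
The "little Turing machines" Patryk mentions are memory-augmented networks like the Neural Turing Machine; the key trick is that reads and writes are soft, and therefore differentiable, so gradient descent can train them. A minimal sketch of a content-based read (names and shapes are our own, not the paper's):

```python
import torch
import torch.nn.functional as F

def content_read(memory, key):
    """Differentiable content-based read, NTM-style (sketch).

    memory: (N, M) matrix of N slots; key: (M,) query from a controller.
    Because the read is a soft attention over slots, gradients flow back
    through it, so the controller learns what to read (and, with the
    analogous soft write, where to write).
    """
    sims = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (N,)
    weights = F.softmax(sims, dim=0)                             # soft address
    return weights @ memory                                      # (M,) readout
```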

Patryk    01:04:50    If, you know, the transformer could do what our hierarchical system was doing better, pragmatically, and give our robots better visual capabilities? Absolutely. Similarly, we could have gone the other way: if spiking were somehow better, we would go in that direction. So I was putting that on a spectrum, right, of transformers being maybe a bit more removed from biology than our model, and spiking as closer to biology than our model. You know, we sort of just chose to ignore a lot of stuff for the purpose of getting a minimal design to work with and understand.

Paul    01:05:33    Here's another way to ask the question. Do you think, all things being equal, and not that they ever are, is it more important to use inspiration from low-level brain processes, you know, like spiking, for instance, or from higher-level, cognitive science-y behavioral processes, like the way that humans do things, building it more top-down that way? Or is it more important to focus on sort of bottom-up processes? And you can't cheat and say both.

Patryk    01:06:03    Right. Um,  

Paul    01:06:07    I love asking you these impossible questions. This is why I wanted to have you on, too, because it's just fun for me.

Patryk    01:06:13    It's a really, really good question. Maybe...

Paul    01:06:19    The problem is you're a careful thinker. And so what you needed was to be knee-jerk, like the majority, right, and say, this is the way.

Patryk    01:06:30    Well, I guess, what if we put it this way? I think human behavior is great to learn from; human error is also great to learn from. How about using both, like this: the low-level stuff, you know, qualified by the behavior? Right? So, your...

Paul    01:06:54    That's the answer, but it's somewhat of a direction. I'll accept that, I'll accept it. Are there things that you have been thinking about besides sailing recently that you've kind of been struggling to wrap your head around? A problem you'd like to tackle that's just beyond your expertise or domain?

Patryk    01:07:21    I mean, I certainly have been wondering about how I would move the research direction that we started, you know, six years ago or whatever, how we might move that forward in a good, practical way. Something where it's not a massive towering computer sitting next to a tiny robot to make something happen, and also taking into account more about the robot's own actions. So I think there's a whole research program that would be neat to plan out and figure out. I don't really know how it would move forward at this point, though.

Paul    01:07:54    If someone offered you a faculty position, a full lab, well-funded, at an academic institution at this point, would you accept? Would you start your own lab if given the opportunity?

Patryk    01:08:06    I'm not sure I would do that at an academic institution.

Paul    01:08:10    But it's easier to do your own research at an academic institution, no? I mean, otherwise you have to convince a funding agency. Well, you have to do that at an academic institution too. What about, let's see, a tenured position? Well, that's not fair, that's cheating, because it's not a real lab situation. But so you would rather start your own company and work on these things?

Patryk    01:08:35    I think so. I think so. There's something important about, actually, what I learned and what really drives me is the importance of what you're developing actually working. What matters is that it's working and that it's helpful and beneficial. And if that's what you're working on, yes, it does take more work to get it out there. But I think that's maybe the difference with the term "academic" in colloquial speech, right? If something is academic, it means it doesn't necessarily have that utility. That's how people say it: oh, that's very academic. So that's maybe where I'm going with this. So, okay, I guess it doesn't matter whether I were in a faculty position or not, right? That doesn't really matter, unless somehow the faculty position had a huge number of responsibilities that would distract me from...

Paul    01:09:44    Administrators.

Patryk    01:09:45    That might be part of the issue.

Paul    01:09:47    That's why I didn't say, "if you were offered a chair position" or something, because that's unreal, isn't it?

Patryk    01:09:55    Probably. That's not core to my interests at this stage. Maybe in the future, but right now that kind of thing is not core to my interests. I'm also not sure I want to train graduate students, you know, either.

Paul    01:10:09    Because you’re autonomous and you don’t care about people like me, et cetera,  

Patryk    01:10:12    In that case, I'm happy to advise them from afar, for...

Paul    01:10:18    A lot of money, so you can solve problems.  

Patryk    01:10:23    I've had great, great conversations with people that I advise, so it's great. Actually, teaching is something I also really appreciate. I love, you know, it's been over Zoom, but having calls with people I know who, you know, maybe they already have their PhD or whatever, but just having conversations and potentially even working on a little contract. I do take seriously, I think, that our degrees say something about the responsibilities that came with them, and I think teaching is a big part of those responsibilities. So I definitely appreciate being able to do that occasionally too, but...

Paul    01:11:03    Well, along those lines, when we were in graduate school the slogan was "from bench top to bedside," right? And that's a very problem-solving, practical thing that I just, you know, brushed away. Yes, yes, yes, from bench top to bedside, yes, that's what we're doing.

Patryk    01:11:20    That's true. So yeah, no, I think it's just about mindset, whatever position, right? I think you could do it in industry, you can do it in academia, and you know, there are hybrid things that are popping up now too, which...

Paul    01:11:34    It's exciting.

Patryk    01:11:34    It's very exciting. So yeah, any place is good, as long as you can do what you think is important, right?

Paul    01:11:44    Given your comb-shaped skill set, if you look back, starting from, well, I don't know when you consider your beginning, quote-unquote, but you can choose: would you take a different trajectory? Would you do anything differently in your earlier career, looking back?

Patryk    01:12:07    Right. I don't think I would change anything. I mean, certainly the path was not certain; it didn't feel very certain as I was going along. But given where I am now, looking back, yeah, I wouldn't change anything. Although that's an interesting point, right? Could I have gotten here faster if I had stayed true to my interests and really pursued them? Possibly.

Paul    01:12:36    But then you wouldn't have learned nearly as much. If someone stays true just to their interests, doesn't that make them more I-shaped?

Patryk    01:12:43    It might, it might have. And I think I might not have arrived where I am today if I had forced that. Yeah.

Paul    01:12:53    Do you feel like, because often people give the advice that you have to step out of your comfort zone, do you feel like you've done that over and over? Or has it always been kind of a small step, and you've been comfortable in your role? Have you made it a point to step out of your comfort zone, and have you felt like you actually have?

Patryk    01:13:12    Good question. I think I have stepped out of my comfort zone, but with gusto, I think, with interest in it. Right? So...

Paul    01:13:23    So you're comfortable because you're interested. You've always seemed comfortable to me in whatever situation you're in, and that's also something that I have admired from afar, I suppose. This is the first time I'm telling you that, but...

Patryk    01:13:37    Well, I wish I were as comfortable in those situations as I appear to be. I think, yeah, no, it's challenging to go into new environments and solve new problems, right? But I think it's also very, very interesting. So I guess it's just the outlook: if there's an interesting problem, if you're willing to learn, and if you can be comfortable with not knowing all the answers, and ideally you're in an environment that's supportive of that, that's the best thing. But that, I think, is hard to find.

Paul    01:14:17    But is there anything that you wish you knew earlier on, where you felt like, if I'd only known this or had skill X, it would have 10x'd my rate of progress? There's a related question, I know you're thinking and I'm sorry to interrupt your thinking, but a related question I was going to ask is: you had software development skills essentially before software development was required in neuroscience, right? Coding skills, for instance. It wasn't required when I started graduate school; I had to learn coding in graduate school, but now it seems to be a requirement. Is coding slash software development, and I know those are different things, is that computer side of things the most important skill to have, or to learn, for what I'm calling neuro-AI, computational neuroscience, the study of intelligence?

Patryk    01:15:21    Possibly. I think it is important to be able to write software; it is tremendously helpful, tremendously helpful. I guess I would highly recommend learning something other than just Python. It's possible, I don't know, after 10 years, I still don't like Python.

Paul    01:15:54    Writing.  

Patryk    01:15:56    It's terrible. But anyway, I think another thing that would have been beneficial, well, it's hard to know, right? I've always been interested in FPGAs, this way of developing things in digital hardware: essentially chips that can be reprogrammed, reprogrammable microchips. And that's the one skill I don't quite have yet; I've done barely the basics. I feel like that would be tremendously helpful to have moving forward, for robotics or for research purposes, for building perceptual systems: understanding how to translate software ideas into hardware. But I can't say, you know, would having known that have changed my path at all? Would it have allowed me to be a 10x-er, potentially? Actually, I think on Wall Street, traders are using FPGAs to run really fast trading algorithms.

Patryk    01:17:03    Right. So that is a case where they are getting a 10x benefit out of it. Could I have somehow gotten a 10x benefit on something? I'm not sure it would have applied to anything I ended up doing. But, okay, I think one of the most important things for neuro-AI is to overcome the introspection bias, where you think that things work a certain way because you think that's how they work in your own brain. I think people come up with theories about how their own mind works, right? So rather than being driven by the scientific research and findings, they're driven by how they think things work. And I think, you know, the reason people fixate on things like chess and Go and all these video games is that people view these as quintessential intelligent human behaviors, activities of logic, right? And they're not, really.

Paul    01:17:58    Going back to GOFAI, good old-fashioned symbolic AI. I mean, they would videotape people playing chess and look at the behavior, right? So that's always been kind of the intuitive, introspective approach, although I guess that was an attempt to be objective about it. But they were still relying on this idea that chess is the way to go, right?

Patryk    01:18:18    Yeah, chess was the way to go. Maybe it's a bit of an ivory tower concept as well, right? Just, you know, look outside of that for the real problem. What is the actual problem that we would like to solve?

Paul    01:18:33    Isn't that the hardest thing to do, though? Because we as scientists are all inundated with our own biases, and this is a known, awful problem. That's what we need to develop AI for: to remind us how biased we are and to generate alternative hypotheses.

Patryk    01:18:49    I think that's key. That's exactly key. Yeah. There's all this concern about bias in AI, but you're right, there's a bias...

Paul    01:18:57    I know, right? So that's the thing we're trying to get rid of in AI, but we're perfect and we have bias, so put it in. That's right.

Patryk    01:19:06    No, that's exactly right. It should help us. Working on these systems should be very helpful, as an exercise and as an end product, to build these systems.

Paul    01:19:19    Patryk, this has been super fun for me. You know, usually when I have a guest on, it's the first time I've met them, or I've met them once or twice before. You, I've met multiple times over the years, and I look forward to continuing to meet you over and over, and 10x-ing my own knowledge base by asking you questions. This has been really fun. Thanks for coming on.

Patryk    01:19:40    Thanks, Paul. Thanks for having me. Great to chat, and I'm looking forward to the next time.

Paul    01:19:51    Brain Inspired is a production of me and you. I don't do advertisements. You can support the show through Patreon for a trifling amount and get access to the full versions of all the episodes, plus bonus episodes that focus more on the cultural side, but still science. Go to braininspired.co and find the red Patreon button there. To get in touch with me, email paul@braininspired.co. The music you hear is by The New Year. Find them at thenewyear.net. Thank you for your support. See you next time.

0:00 – Intro
2:22 – Patryk’s background
8:37 – Importance of diverse skills
16:14 – What is intelligence?
20:34 – Important brain principles
22:36 – Learning from the real world
35:09 – Language models
42:51 – AI contribution to neuroscience
48:22 – Criteria for “real” AI
53:11 – Neuroscience for AI
1:01:20 – What can we ignore about brains?
1:11:45 – Advice to past self