Srini is Emeritus Professor at the Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, and proximity to objects, and to land gracefully. These abilities are largely governed by control systems balancing incoming perceptual signals against internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
- Srini’s Website.
- Related papers
Transcript
Srini 00:00:03 I would say that, you know, no organism perceives the whole world in its total reality, as it is. I would say each animal experiences the world through the window of its own visual and perceptual capacities, right, and its limitations. The idea was to basically test whether the principles we extract from honeybee vision, flight, navigation and guidance really work. For example, can you use optic flow information to do all the things that insects do? You know, for a long time, bees and other insects were really considered to be rather simple, reflexive automatons. Pattern recognition in bees has gone a lot beyond this over the last few decades. So they're able to learn abstract properties of patterns.
Speaker 2 00:00:56 This is brain inspired.
Paul 00:01:09 Hello, good people. I'm Paul. So it's often stated that the human brain is the most complex object in the universe. That's probably not true, but we do know it is complex, and our effort to understand how our brains work, and/or to build human-like intelligence, is constantly running up against that complexity. And of course, we study the brains of other organisms as models or proxies of our own brains, hoping we can extract general principles, often from brains much smaller than our own. For example, the bee has a brain with about 1 million neurons. That's a big number, but it's also tiny compared to our 86 billion neurons. But organisms like bees, with relatively few neurons, keep surprising us with impressive feats of intelligent behavior. Today my guest is Mandyam Srinivasan, or Srini, who's an emeritus professor at the Queensland Brain Institute in Australia. So you know about the bee waggle dance, I'm sure, where a bee will fly to a location and forage in that location.
Paul 00:02:12 And then fly back to the hive and do a little dance to communicate to the rest of the hive where the good food is. Srini has for a long time studied the abilities of bees and other insects, and birds. And in this episode, he shares much of what he has discovered, specifically about bees: how they navigate, and how they use perception to control their flight speed and path, and their graceful landings, for example. We also discuss a few of Srini's robotics projects, where he applied principles of bee flight to create automated aerial robots that can take off, fly, maneuver, and land quite well. And beyond flight, we also discuss some of the higher cognitive abilities bees can be trained to perform, the possibility of their having subjective experience, and their capacity to feel pain. In the show notes, I link to a nice review that summarizes a lot of what we talk about in this episode. Show notes are at braininspired.co/podcast/134. Thank you to my Patreon supporters. Here's a little something extra for you today, which I hope captures my gratitude for your support. And bonus points if you can reference this little ditty.
Speaker 4 00:03:21 Thank you for being a friend, travel down a road and back again.
Paul 00:03:29 Thank you guys. Okay. Enjoy Srini. When I got into neuroscience, like many people, I, uh, I was interested in the quote-unquote big questions like consciousness, and I ended up studying, um, monkeys in a laboratory setting. Why, uh, why insects? And I don't mean that as an insult. What drew you into studying bees and insects and birds?
Srini 00:03:55 Okay. Thank you, Paul. Thank you for having me. First of all, it's a great delight and a great honor to be with you on the show. Yeah. Thank you. So just a little bit about my background. I was born and raised in India. I did my bachelor's degree in electrical engineering and my master's degree in control systems in Bangalore, and towards the end of my master's degree, I think I underwent a kind of premature midlife crisis. I wanted to pursue something other than just a straight, standard career as an electrical engineer. So one of my professors suggested that I could try to apply my knowledge in electrical engineering and control systems to modeling the functioning of a biological system. So we decided to model the system that controls the movement of the eye when it tracks a moving target, modeling it as a servomechanism, a feedback control servomechanism.
Paul 00:04:48 Smooth pursuit, the smooth pursuit eye movement. Smooth
Srini 00:04:50 Pursuit eye movements, right? Yeah. And this turned out to be a very engaging sort of project, and it was a lot of fun. Um, and so when I went to Yale to pursue a PhD in engineering, I was again keen to do research that was sort of at the interface between engineering and biology. And the only person I could find there at that time who had similar interests was a professor by the name of Gary Bernard, who was studying and modeling the optics of insect eyes. This sounded quite interesting, and I jumped right into it. In those days, you didn't worry about future career prospects. You just followed your heart, you know, and let things happen, which was wonderful. Things are not the same these days, unfortunately.
Paul 00:05:31 How is it different these days?
Srini 00:05:32 Oh, my students, for example, are constantly worried about, you know, whether their projects are going to get them a job, going to allow them to continue in academia, that sort of thing. What's in it for me? They're not so interested in the intrinsic questions that are being asked and how interesting they are. That seems to be secondary.
Paul 00:05:51 So you kind of naturally went from control systems and engineering into studying what has, you know, classically been studied from kind of an engineering perspective, because the saccadic eye movement system and the smooth pursuit eye movement system are both very well understood circuits. So that was probably, yeah, that was probably a smooth transition to the optics studies in insects.
Srini 00:06:15 Exactly, exactly. And an interesting thing with the PhD was that, you know, Gary himself was really more into the optics and the photoreceptor aspects of it, whereas I was more interested in the sort of higher-level processing that drives the behavior. So it let me be fairly independent as a PhD student, and, you know, it was nice. We had almost two separate sorts of things going on in the same lab, which was really good fun. So it taught me to be a little more independent fairly early in my career, which was really good.
Paul 00:06:47 <laugh> That's a nice balance. Would you recommend that kind of balance? You know, thinking about people who struggle to find the right advisor and what's the perfect match. Are you suggesting that it might be nice to have? Yeah.
Srini 00:07:00 Ideally, if there's funding to allow a student to do whatever they want to do for their PhD, and if the supervisor, in this case myself, feels competent enough to, you know, supervise that particular project, then that would probably be the best solution. Unfortunately, there are so many different constraints, because quite often you're hiring students through a grant, which forces you to do a particular kind of project. And you have to sort of keep them on that straight and narrow project to make sure that the project succeeds, right? Mm-hmm <affirmative>. And there are all these constraints, which is sad, but it wasn't like that in the old days. It was so different, you know. You went to register to do a PhD with a professor, you had your fellowship or scholarship, and off you went. There were no constraints. <laugh> All you had to do was good work and produce a lot of good publications, and that was it.
Paul 00:07:54 Well, those problems are behind you, right? Because you're three years retired now.
Srini 00:07:59 Thankfully, yes. Thankfully, yeah. Nowadays my engagement is all fun and no responsibility. So I'm sort of a backseat operator on a couple of research grants, but I'm not getting any money. I'm deliberately not asking for any money. I just want all fun and no responsibility. I'm kind of an armchair advisor. Yeah. <laugh>
Paul 00:08:19 Very nice. Well, you've done a lot of work over the years on bee navigation in particular, and you've worked on birds as well. And we may talk about birds, but mostly we'll probably talk about bees and their navigation abilities and flight abilities. Yeah. And their cognition. Mm-hmm <affirmative>. So give us a sample. I know it's an impossibly large question because you've done so much work, but what has your work revealed about bee navigation and flight?
Srini 00:08:50 Yeah, sure. I could start by telling you a little bit about how I got into bees in the first place, because at Yale I was really working with houseflies, looking at the motion-detecting system of the housefly and seeing how it guided, you know, course stabilization, and how it also enabled the fly to pick out small moving targets, like other flies, and chase them for the purpose of mating or territorial defense. So that was the project for my PhD at Yale. But then I moved on to do a postdoc at the University of Zurich in Switzerland, where I also had to <laugh> learn German and lecture in German.
Paul 00:09:23 Oh my gosh.
Srini 00:09:24 But anyway, <laugh>, that was quite a challenge. Yes <laugh>.
Paul 00:09:27 Are you still, are you fluent
Srini 00:09:28 Fluent? Well, not anymore. But you know, even when I was fluent, it was stressful for me. It was even more stressful for the students, because the poor students had to kind of try and understand what I was trying to tell them. <laugh>
Paul 00:09:41 Yeah.
Srini 00:09:42 <laugh> But Zurich was such an amazing place. It was there that Professor Rüdiger Wehner introduced me to the world of bees. And it was there that I realized that the bee is an amazing learning machine, you know. And what I found is, not only can they learn amazing things very quickly, but what's really appealing is that you can study their behavior by tapping into their natural lifestyle. Mm-hmm <affirmative>. So you can sort of entice the bees to come into your lab, you know, draw them in with a nice feeder of sugar water, move the sugar water feeder step by step into your lab, and they're coming in of their own free will. I mean, there's no coercion here. They're free to go somewhere else if they find better food, but we have a very strong sucrose solution that they find quite tempting, and they come. And the thing is, you're catching them and you're observing them in their sort of natural style of behavior.
Srini 00:10:32 So they're not being forced, not like a caged rat or a mouse that you're trying to train, right? It really is a freely moving, behaving animal. And also, once you have them coming into your lab and visiting your apparatus, which you've designed to try and answer the question that you've set up, I mean, they fly back and forth and you can film them in a way that actually addresses the question that you want to answer. If you're studying another creature, for example, like a fly or a dragonfly, you have to have your cameras trained on the creature all the time, and then wait for the right moment when that behavior occurs, right? Whereas here, you don't have to do that. You have them coming directly to you, and they're doing exactly what you want them to do in the right location. So you've got full observability, which is a very efficient way of filming and recording what they're doing in the lab. So that was really very nice about bees. And of course, no bees are hurt or harmed in the whole process. Once you finish the experiments, they're free to go back and continue their normal foraging lifestyle.
Paul 00:11:33 How long does a bee live? I forget.
Srini 00:11:35 They live, uh, about, about four most. Okay. But typically our experiments are run for, uh, three or four days. And for three or four days, you’ve got the answers you want. Mm-hmm, <affirmative> either you got them or you don’t have them, but <laugh> in three or four days, you let, let them go.
Paul 00:11:48 It must take, what, probably two days to entice them in with the sugar water and get them comfortable coming in?
Srini 00:11:54 No, no, no. Well, it depends. In Zurich, the way it works is that we were working on one of the upper levels of the building, and there's a balcony outdoors. So over there you have a sugar water feeder permanently, all the time. So bees already know that location. There are a few beehives on campus, but they don't necessarily have to come from any of those beehives either. They can come from anyone's home, you know; someone could have a beehive at their home and they could come. And the way you start an experiment is this: the sugar solution in the feeder outdoors on the balcony is not very strong, it's moderate. So it keeps the bees interested.
Srini 00:12:35 So one bee comes along and then you sort taste it and says, go back and does, does it, if it likes, it goes back and does the dance and turns all the other be, Hey guys, there’s something interesting, come and check it out. So more and more bees come in and you have a steady steam of string of bees coming in, uh, building that feed. Now, if you want to start, when you wanna start an experiment, you, you, you take this feeder and then, uh, move it in step by step into the lab, through the doorway, into the lab, and you can move the feeder forwards by, you know, about a foot. Ooh, uh, every five minutes. Ah, so, uh, quite in fact, people have found that bees can actually predict if, if you’re steadily moving a feeder, they actually predict the speeder movement of, and look ahead when they come next time on the next visit. So, wow.
Paul 00:13:17 So they predict the next location. Huh. And you said that the sugar water was weak at first. Are you strengthening it as you move it?
Srini 00:13:26 Yeah, we're strengthening it. Sorry, thanks for pointing that out. Yes, we're strengthening it, and of course that gets more bees excited, and then more and more bees come in. And you're ready to run the experiment, but you don't want, you know, a hundred bees visiting your apparatus, right? Yeah. So what you do, when you've got the right number of bees coming in, is you mark them individually with colored dots. Mm-hmm <affirmative>. And at the same time, you place another feeder outside the door, and that's a weaker sugar solution. And that acts as a sort of decoy, because the newcomers that have been recruited by these really enthusiastic bees that are going back home and dancing, they come and look at the most obvious spot, which is the feeder outside.
Srini 00:14:07 And they have a taster that for sugar, for all the solution, they say, ah, this is not as good as it cracked up to be. It was a false alarm. And so they get disappointed and don’t come back again. Oh. So that way you can control the number of bees that are coming in and make sure it’s only the bees that you marked individually that keep coming again and again, faithfully and all the other bees, the sort of recruits come outside and get disappointed and they don’t enter the lab. So they don’t disturb the experiment.
Paul 00:14:29 A lot of what you have done with bees are behavioral experiments looking at, you know, how they navigate, right? So, you know, as they're coming in, I mean, you can describe this much better than me, but as they're coming in, you're then having them fly through different tunnels, and you're putting up different optical shapes and patterns, which shows how their flight varies. And through this you've found out a bunch of stuff. I don't know, how would you summarize <laugh> some of what you've learned about how they do it?
Srini 00:14:59 Well, I mean, it all started with a fairly random, chance observation. We found that when bees were entering our lab, some of them would take a shortcut and fly through a hole in the wall rather than fly through the open door. And we noticed that when they flew through this hole, they were flying fairly precisely down the middle of the hole. Mm-hmm <affirmative>. And we wondered about that, because how can they actually, you know, fly so precisely down the middle when, in fact, they cannot measure distances the way we do? Their stereo vision is sort of compromised, because the two eyes are actually very close together. Our eyes are fairly far apart, about six centimeters apart, I think. So if I look at my finger with one eye and then with the other eye, the image of the finger is displaced between the two eyes.
Srini 00:15:46 Right? Yeah. And so your, your, your stereo human sort of stereo system is, uh, working out the triangle. It’s measuring that disparity or displacement and working out how far away your finger is. So that’s how, uh, 3d me stereo works. Right. But if you now take these two eyes and move them progressively closer and close it together, until you come to the point where you have, where insect, when you, the two eyes, actually a couple of millimeters apart that shift or disparity becomes very small and it’s very hard to measure. So, uh, insects really cannot rely on stereo unless some object is very close in which cases start to get a bigger shift angle of shift, I should say. Um, so insects really have sort, sort of, uh, evolved or come to rely on a completely different way to see the world in 3d. And that’s to actually move in the world actively and, and look at how rapidly images moved, you know, uh, in their eyes as they move past them.
Srini 00:16:37 And if something is nearby, it moves very rapidly. If you move in a straight line, and that tells you that this rapid, this rapidity of by movement tells you, well, if image move motion, I should tells you that this object is very close, where something’s very distant, like, uh, you know, a hill or a clouds in the sky. Those don’t move very much at all. If you’re moving in a straight line, and that tells you that those, those objects are actually very far away. So insects, somehow we Evolv to, you know, translate image, motion into 3d dimensional object distance. And we’re doing that too unconsciously. You know, when we move with one eye closed, we are in fact taking advantage of eye movements and head movements and eye movements to, to GLE some information about the 3d measure preceptor of the world. But this is all this has come about, starting from, you know, this is saying to go back to what I was saying, we flew down the middle and we are saying, given the fact that these bees don’t have stereo, how could they possibly be ging the distance and balancing them on the two?
Srini 00:17:34 So, right, right. So to, with one thing we wanted was, are they really sort of flying in through the passage in such a way that the two rims seem to, uh, move past their eyes past the two eyes at the same speed, that one way of balancing your sort of, uh, position in the middle of the hole. Right.
Paul 00:17:51 That was your first guess.
Srini 00:17:53 That was the first guess. Yeah.
Paul 00:17:54 <laugh> wow. Okay. All right. How often does that happen?
Srini 00:17:56 So, so it was this, this is, that was a very lucky guess. I mean, it could have been wrong. I mean, so, so we put bees in this tunnel and then we had the, in the old days, we had convey belts to move, uh, the walls, you know, where they had sharp pattern. We moved them. Of course, you had a per seat in the middle to make sure there were no wind currents so we could influence their flight. So the bees were, were, would be flying down this tunnel to the end, to get a food reward, and then they’d fly back mm-hmm <affirmative>. And we filmed the flight trajectory from a above, uh, to see how they flew when they flew down the tunnel. And when they flew down, when both walls were stationary, they flew fairly precisely down the middle. Well, give or take one or two centimeters, but you pay BS are only human. They make mistakes too <laugh> but fairly, precisely down the middle. Uh, however, when we took one of these walls and moved it in the same direction as the incoming B, then the BS flew a lot closer to the moving wall. And we think this is because when the B in the wall and the same direction, the image of velocity I seen by that eye is much lower. Right? Mm-hmm <affirmative> so the Bing that wall is much further away. Uh, so she moves closer to that wall to, to make up
Paul 00:19:00 To take it back to the center, to where she thinks the center is.
Srini 00:19:03 To the exactly. She, she thinks she’s centering. Yeah. And exactly the opposite thing happens when you move this wall in the opposite direction to the bees in coming, uh, flight direction, because then you’ve got a large image motion on this side. So the beating, Hey, there’s something dly close to this, this wall to the surface, I’m very close to the surface. I better move away from the surface to compensate. So this simple experiment really, uh, taught us that bees really, uh, are flying through these, uh, passages, narrow passages safely by balancing the optic flow, as, as they say in the two lines. And you can also make predictions about where should they position themselves. If you move one of these walls at a certain speed, for example, you can do the little mathematical calculation, Uhhuh and work out where they should be. And it seems to fit really quite nicely.
Srini 00:19:47 So, uh, that seems to be the way they’re actually flying down in the middle. Uh, but once we had the tunnel going, there were lots of other experiments we could sort of things we could test, uh, for example, control of flight speed. How do they control their speed? Uh, we, we noticed that, um, if it’s a constant, uh, with tunnel, uh, they fly, um, at a fairly constant speed right through the tunnel. Initially, what we did was, uh, when they’re flying it through this tunnel, uh, the walls were stationary. Now what we did, they, they flew at a certain speed. Now we moved both walls forward at a certain speed Uhhuh <affirmative> and the bees increased their speed by the same amount as the wall speed. And when you move both walls backward, the bees slowed down by the same amount as the speed. So what they’re trying to do is to keep constant the angle of velocity of the image that the two eyes are experiencing as they fly through this environment. And this is one way of controlling or regulating your flight speed
Paul 00:20:45 When, when they’re flying in an open field, though, for instance, right. When everything is essentially at infinity or, or something, are they going max?
Srini 00:20:51 Exactly. Exactly. You, you, you, yeah, yeah, yeah. Bingo. You, you hit the right question. <laugh> okay. So, uh, the next step, the next question is the tape, the TAPD tunnel. So if you fly them through a taper tunnel, okay. Initially the, the, the walls are fairly far apart and the bees are flying at a certain speed. Uh, speed. Speed. Okay. So by the way, that, that, that first experiment with the, with the, with the, with the, with the static, uh, tunnel of the walls, moving the same direction backwards forward, uh, told us that they’re trying to hold an angle of velocity of about 300 degrees per second, constant as they’re flying through this tunnel right now, if you go to this tape width tunnel, which is why to begin written, then narrows down. You find initially when they come in, they fly at a certain speed, fairly high, and as they progressively move into the tunnel and the tunnel narrows down, they fly slower and slower.
Srini 00:21:42 So what seems to be happening is they’re trying to measure, keep this image velocity constant throughout the passage, but initially, uh, they’re keeping it constant at some 300 degrees per second. Let’s say then, as they could go in closer, the walls are much, uh, further, the walls was closer. So it increases the angle of velocity image, right? But the beasting they’re speeding up. So they’re slowing down to ate for that. And so they keep slowing down further and further until they reach the narrowest part of their tunnel, which is the neck. And then when they went, starts to flare up flare out again, that they, they speed up again. And even through this, this flare taper tunnel, they’re maintaining a constant image velocity of 300 degrees per second. This is a very nice way to ensure that when you, when you’re flying in a wide open environment, you fly fairly fast. And when you enter a dense clutter environment, you automatically slow down. And, uh, nice thing is you don’t need to do, uh, the standard thing that computer vision people and machine vision people will be doing to measure distances to various obstacles and saying, Hey, what should my speed be? You just measure the global average image velocity so that when you’re flying an open metal, you fly fairly fast. And when you enter a forest or something, you automatically slow down to an appropriate speed.
Paul 00:22:47 So they must have an internal reference signal that they’re trying to match, essentially that 300 degrees per second. And so that’s kind like a, exactly, exactly in control theory terms, that would be the reference signal, but that would be an internal innate, I suppose, innate signal. I don’t know if you meant, um, min it this way, but you said they think that they’re slowing down or they think that they’re speeding up depending on how you manipulate their, their environment, but then as they actually are slowing down or speeding up, they, they also must have an internal reference signal for the, um, output of their wing, um, expenditure. Right. So
Srini 00:23:21 Sure, sure, sure, sure. It, it, I, I I’d use that term just to make the, uh, make the explanation more, they could have, uh, the, the trusted you, uh, good for your wing beach. Of course it, one thing there’s also the, uh, the air speed, you know, you could be flying against, uh, you know, headwind tailwind. So they have, uh, mini insects have these, uh, hairs on the tox and the, um, and the abdomen and the Anine as well, which act as wind sensor. So airflow sensors. So they’re measuring their air speed as they fly through the environment. And then these also not, not where you well studied, we don’t know exactly how to, what extent it affects their flight behavior, but they’re certainly there, the sensors are there. Oh
Paul 00:23:56 Yeah. So, so you solved, uh, B B um, flight optic control of flight there. Well,
Srini 00:24:03 There was, there was one part of it there, the more, the next step. Can I describe the next step? Oh yeah.
Paul 00:24:09 I wanna ask you about landing in a second too, because that,
Srini 00:24:11 Oh, landing as well before we go into landing, this is, this is more about, uh, flying through the environment, going to a food source. Um, how, how do bees work out how far they’re flown, because be that it’s gone to a particular food. So it has to come back and signal through its dance, uh, how far it’s flown and it gives the information and polar coordinates, right? The distance as well as the direction mm-hmm <affirmative>, uh, to go. Uh, so the direction of the dance, the vital dance is, is a measure of the, uh, uh, indication direction to go. And the duration of the wa dance, uh, has information about, um, how far away the food sources, the longer the duration of the Wael, the further away the food sources. Mm-hmm, <affirmative> roughly linear relationship between the two. Now, how did they work out? How far, uh, they’ve gone now, we’re not the first ones to have done this California first, you know, Beau to, uh, who did all the wonderful work and on bees, every aspect of bees, um, looked at two and he did something really quite clever.
Srini 00:25:08 What he did was he had bees flying from a high to a feeder, uh, and he sort of, uh, 500 meters away, let’s say at quite 10 distance away. And when they came back, he, he, uh, looked at their dance and filmed it. They, they were indicating, uh, you know, uh, they were calibrated for 500 meters and then he put these tiny led weight on the bees and made them fly the same distance. And when they came back, they were now reporting a much larger distance. So, uh, Juan first decided that this was probably, uh, uh, they’re using a measure of energy consumed to signal the flight distance, right. That seems like the most obvious thing when you’re carrying a weight, it’s sort of, but we think that may not be the case. So what, so what, what we did was the following. So we, we trained bees to fly down this tunnel.
Srini 00:25:56 Again, the very short tunnel, uh, in this case, it was about, I think about eight meters, uh, a fairly short, narrow tunnel to a feeder, and then come back. And even though they flow and only this very short distance of eight meters, they were signaling something like 300 meters in their dances. They were being comprehensively, they were comprehensively overestimating the distance. And we wondered, why is, why is this going on? And one of the possibilities was that they could be measuring not this flight energy consumed, but actually measuring how far, the image of the world, how much the image of the world and move past their eyes as they fly from the, uh, uh, you know, from their hype to the, to the food source. Mm-hmm <affirmative>. So the measuring it visually. Now the, the thing is, um, because, because the walls have sounded very narrow, even a small amount of forward motion causes a huge amount of image motion, right?
Srini 00:26:50 So the things have gone a long way. It’s a bit like, you know, if I were to, uh, fly from, uh, let’s say, Brisbane to Sydney and look down at the ground, the ground is so far away, it wouldn’t appear be moving at all. So I wouldn’t think gone long way. Whereas if I were to drive from Brisbane to Sydney, everything is very close to me. There’s a lot of image motion, and I think I’ve gone a long way. So we think that the odometry, uh, the Bezo Dory is working visually. And the way the test is of course, is to remove that, uh, image queue, uh, that image motion queue and see what happens. So if you only the tunnel would have a textured pattern, a randomly textured pattern or, or, or radical black and white stripes would be, if we change that and make the stripes horizontal, then when the bees are flying down the tunnel, the same tunnel, they don’t experience any optic flow because the flying tile to the stripes. And then when they come back home the dance, again, the signal almost zero distance. Right? Right. So, so they really are. So it looked like the ODOT is really visually driven. What we think went wrong with, well, not wrong. It was the wrong interpretation. Maybe with Carl Fran Fisher’s experiment was that when you load these BES, they probably tend to fly closer to the ground. Oh, that increases the optic ES by the ground. And so they, the ODOT is telling them to go for the distance.
Paul 00:28:05 That’s a nice segue into the landing. I, I wanna come back also to, to, to, I mean, there’s a lot of questions I have, but, um, I don’t know, maybe if that’s the right, if this is the right time then, because their landing is, is con controlled, I suppose, by this, uh, a very similar optical flow kind of, um, system. Correct?
Srini 00:28:23 Yeah, that's the thing. The same system that keeps the flight speed constant, that tries to keep the image velocity constant at about 300 degrees per second when they're cruising, that same thing is put to a different use when they're landing. The way we did this was to simply train bees to come and land on a drop of sugar water placed on a horizontal textured surface; really it can be a wooden table with some grain, so there's some texture they can see. And we filmed them in three dimensions as they came in to land, analyzed the flight trajectory data, and found a very simple thing: when they're flying high, they fly fairly fast, and when they're flying low, they fly slowly.
Srini 00:29:07 And in fact the speed of flight, the speed of approach, I should say, to the target is very strictly proportional to the height above the surface. And what that is telling us is that as they're coming in, they're keeping the image velocity of the ground constant in their eyes as they're coming in to land. Right? If you were sitting in a plane, for example, and looking out the window as the pilot is coming in to land, then as you get closer and closer to the ground, you'll find that the image of the ground appears to move faster and faster, right? But that's because the pilot is not slowing down. The bee doesn't do that. The pilot comes in at constant velocity; the bee slows down appropriately, all the time keeping this image velocity constant. And this automatically ensures that when it's coming in very close to the ground, it's flying with almost zero velocity.
Srini 00:29:53 So it doesn't hurt its feet when it makes contact with the ground. It's a very nice low-impact landing strategy. And the beauty of this is that you don't need to know at any time how far away you are from the surface, and you don't need to know how rapidly you're approaching it. All you need to do is measure the image velocity initially and keep that constant, adjust your speed to keep it constant as you're coming in to land, and bingo, you've done it. It's a beautiful biological autopilot for making a nice, smooth landing. Yeah.
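The landing strategy can be sketched as a simple control loop: command forward speed proportional to height, so the ground's image angular velocity v/h stays at a setpoint, and descend along a fixed glide angle. The 300 degrees per second figure is from the interview; the initial height, glide angle, and time step are arbitrary choices for the sketch:

```python
import math

def simulate_landing(h0=2.0, omega_set=math.radians(300),
                     glide_deg=30.0, dt=0.01, steps=2000):
    """Grazing-landing sketch: command forward speed v = omega_set * h so the
    ground's image angular velocity v/h stays at the setpoint, and descend
    along a fixed glide angle. Height then decays exponentially, so the bee
    reaches the ground with almost zero speed."""
    h, speeds = h0, []
    for _ in range(steps):
        v = omega_set * h                         # keeps v / h = omega_set
        sink = v * math.tan(math.radians(glide_deg))
        h = max(h - sink * dt, 0.0)
        speeds.append(v)
    return h, speeds

h_final, speeds = simulate_landing()
```

Because commanded speed is tied to height, the speed trace mirrors the exponential height decay: no range sensor and no explicit approach-rate estimate are needed, just the one image-velocity setpoint.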
Paul 00:30:22 It’s an elegant and very simple way.
Srini 00:30:23 Yeah, that's right. So that was on a horizontal surface. But what we then did was to look at landings on vertical surfaces, where the bee has to come and dock, for example on a flower. When you do a grazing landing on a horizontal surface, the image motion is mainly from front to back in your eye as you look down below, in your ventral field of view. Whereas if you dock on a vertical plane, you approach it frontally, and what you see is a sort of expanding image, like watching a Star Wars movie or something like that. <laugh> A different pattern of optic flow. So we also wanted to know whether bees use a different strategy for landing on vertical surfaces as opposed to horizontal surfaces, because the pattern of optic flow is different.
Srini 00:31:08 So we tried to test this by having bees come in and land on a vertical pattern. But the pattern in this case had a spiral on it, and I'll tell you the reason for the spiral. First of all, we found that as a bee came in and approached the thing, it showed the same behavior: when it was far away from the spiral it approached fairly fast, and as it came closer and closer it slowed down progressively. Again, a nice linear relationship between approach velocity and distance to the landing target. Right? So we thought, again, it must be something to do with image motion. Now, the bees are seeing a kind of expanding pattern as they approach the target; that's what happens when you approach a target frontally, you see something expanding. And the nice thing about the spiral is that you can artificially manipulate the rate of expansion by spinning it in a way that makes it appear to either expand or contract. When you rotate the spiral to make it appear to expand, you are increasing the apparent rate of expansion that the bees experience as
Srini 00:32:09 they approach the target. We find that the bees then hit the brakes and approach the target more slowly, in order to maintain the original rate of expansion. On the other hand, when the spiral is rotated to create an apparent contraction, the bees approach the target more rapidly. This simple experiment tells us that the landing bees are approaching the spiral in such a way as to hold the apparent rate of expansion of its image constant. So really they're doing the same thing. It doesn't matter whether you're approaching a horizontal surface, an oblique surface, or a perpendicular surface. All you have to do is look at the total optic flow that the surface is generating around the target you're going towards, and keep that flow constant, no matter what it is, as you're coming in to land, and you've done your job. A very simple, elegant strategy, which we wouldn't even have thought about until we started to look at these bees, doing a very simple experiment and measuring their flight <laugh> as they landed.
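The spiral manipulation can be mimicked with one line of control logic: measure the apparent expansion rate (the true v/d plus whatever bias the rotating spiral injects) and rescale speed to restore the setpoint. The gains and numbers below are invented for illustration, not taken from the experiments:

```python
def adjust_speed(v, d, spiral_bias, r_set):
    """One control update for frontal docking: the measured apparent rate of
    expansion is roughly v/d, plus any bias added by spinning the spiral.
    Rescale speed so the measured rate returns to the setpoint r_set."""
    measured = v / d + spiral_bias
    return v * r_set / measured

# expanding spiral (positive bias): expansion looks too fast, the bee brakes
slower = adjust_speed(v=0.5, d=1.0, spiral_bias=0.25, r_set=0.5)
# contracting spiral (negative bias): expansion looks too slow, it speeds up
faster = adjust_speed(v=0.5, d=1.0, spiral_bias=-0.25, r_set=0.5)
```

With no bias the update leaves the speed alone; holding v/d constant as distance shrinks is what produces the slow, near-zero-speed contact, exactly as in the horizontal-landing case.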
Paul 00:33:07 Well, you know a lot more about engineered flying systems, so I wanted to ask you about robotics in a second. But just as an aside, one of the things you were talking about is how, when you change their perception of how fast or how slow they're flying, et cetera, they will come back and report incorrect distances through their waggle dance, right? Incorrect distances and directions. However, as you point out in some of your talks, it doesn't matter, because the other bees take that same route and they're going to have that same error. So it's fine.
Srini 00:33:45 Exactly. So if a bee flies some distance through an open environment, it'll come back and signal some distance, but if it flies the same distance through a different environment, for example through a forest, it'll come back and signal a much bigger distance. But as it turns out, all the bees that follow the dancing bee will take the same route as the original scout bee took. Right? So they'll experience the same environment. So it doesn't matter if the ruler or yardstick you're measuring with is not perfectly calibrated; as long as all the bees use the same yardstick, everything cancels out. Right. So that's the thing, I think. Yeah.
Paul 00:34:26 So it’s really tied to, uh, the perceptual capabilities and skills of in this case bees. What I was gonna ask you is, you know, there’s a lot of talk these days about ecologically valid studies and ALS uh, you know, like the different organisms relations, like the, the, their specific environment and what they’re evolutionarily honed for. So does this in your eyes, does this tell you that, um, what we are perceiving as organisms, uh, isn’t necessarily vertical as in the real world, but BA it’s just based on our perceptions. And if you and I share the same perception systems, uh, we’re gonna make the same errors, but they’re not errors to us because we’re following our own subjective, evolutionarily, honed, uh, ability. Yeah,
Srini 00:35:11 Exactly, exactly. I would say that no organism perceives the whole world in its total reality, as it is. I don't think there's a totally true or complete representation of the world in any creature, including humans, or for that matter even in a machine, because each animal experiences the world through the window of its own visual and perceptual capacities, right, and its limitations. For example, insects have poorer spatial acuity than humans; their vision is not as sharp as ours. Their visual resolution is, I think, only a factor of 60 poorer than humans'. But they have much higher temporal acuity. They can see the flicker in a fluorescent lamp, for example, which is flickering at, well, in your country it's, what, 60
Paul 00:35:57 60, I think so.
Srini 00:35:58 60 Hertz, so 120 flashes per second, and they can see that clearly, whereas we cannot. Our eyes are sluggish but sharp, <laugh> whereas theirs are not that sharp in terms of spatial resolution but very rapidly responding. And they need that, because they're flying through dense environments where objects are very close to them, so they want to see lots of objects without blur when things are moving rapidly past their eyes. And bees can see in the ultraviolet, which we cannot; they can see ultraviolet marker patterns in flowers, which lead them to the nectar, and we cannot see those. They can perceive polarized light in the sky, which we cannot. So their visual world is entirely different from ours, you know? So what is really real is hard to decide, right? Eagles have higher visual acuity than us.
Srini 00:36:44 They can see at much better resolution than us. So yeah, every creature has its own world, I think. And even across humans, I don't know if you can say that all humans perceive and recognize objects in the same way. I mean, both you and I could look at a car and agree, okay, this is a particular model of Ford, but whether your brain responds in the same way as mine does, I don't think we know yet. Right. And even if it doesn't, it doesn't matter, as long as we both agree. You have a certain pattern of activation in your neurons that says this is that model of Ford, and mine has a different pattern of activation, but I've also learned that that is the model of Ford. So we both call it that model of Ford, but the representations might be quite different. Right? We can't be sure.
Paul 00:37:26 And you’re red and my red who knows who, who knows if they’re the same. Right. <laugh>
Srini 00:37:29 Exactly, exactly. <laugh>
Paul 00:37:32 Well let’s um, do you wanna shift and talk a little bit about, uh, the robotics that you’ve been developing also, I, I know you have a few projects have been, uh, directly based on these optic flow studies in, uh, insect and navigation.
Srini 00:37:45 Yeah, it's interesting. The way we got into robotics was also quite accidental, because we were really doing basic bee research. And then at one stage somebody from DARPA, who happened to be at a conference where I was talking, approached me later and said, hey, would you like to get some money to work on developing biologically inspired strategies for flying machines? And it sounded like a good area. We didn't go looking for it, but we sort of got enticed into it.
Paul 00:38:15 I really was imagining that you had that in your back pocket the whole time, coming from that engineering background.
Srini 00:38:21 No, it’s, it’s strange. Yeah. It’s strange as an engineer. I, yeah, it’s so stupid that I didn’t, didn’t even think about it. Uh <laugh> even, even the, uh, thing about navigating safely down the corridor and balancing the optic flow, uh, the, the first people to actually, uh, pick up that idea and use it to navigate robots where it was not people from, from our lab, it was people in labs, in Italy and in France and so on. So they, they were using it and they say, Hey, my God, why didn’t we do that? But, you know, we weren’t really, we were really looking for, uh, we just having fun with these insects. <laugh>
Paul 00:38:53 It’s enjoyable work, isn’t it?
Srini 00:38:54 Yeah, that’s right. I mean, we, we would, nowadays of course, I think the way research goes is that you tend to be more applications oriented. You have to be that way because you you’re the Christian and the grant application depends very much on yeah. How you cant, how, how, how relevant this research is going to be. That again is something was something that was not there in the old days, which is something I, again, missed very much. You, you
Paul 00:39:15 have to either cure disease or build a good robot or something like that.
Srini 00:39:19 Yeah, exactly. Exactly. Exactly. Bingo. You’ve got it.
Paul 00:39:22 <laugh> oh, I know. Yeah. So, okay.
Srini 00:39:24 Do something useful. Yes.
Paul 00:39:26 So it was the DARPA money that got this off the ground, so to speak, pun intended.
Srini 00:39:32 Exactly, exactly. So we have DARPA to thank for that. Of course, in a way, working for a military funding organization, and we were also involved with the US Air Force and the Office of Naval Research and so on later, there's good in it, but you always worry about whether what you do could be used, right,
Paul 00:39:52 Uh,
Srini 00:39:53 you know, incorrectly and things like that. But it turned out to be actually very good. DARPA especially was really very keen on promoting basic research. They wanted us to publish our work in good journals; they were not trying to keep it classified or anything like that. So I really appreciated that. Hmm.
Paul 00:40:11 Well, one of the things that you have worked on, and I don't know how many of these different robotic systems you want to talk about, but one of them is like a fixed-wing plane, a kind of standard model plane, but then you install a kind of fly navigation system using these principles.
Srini 00:40:28 Right. That's right. That's right.
Paul 00:40:29 Before I have you describe that: there's this kind of tired trope, right, of how much biological detail we need to build into, for example, artificially intelligent systems. And often the example people use is, well, you don't want to build wings to build flight; you want to use the principles of lift and aerodynamics, right? However, not all flight is the same. Sure, if you just want lift and propulsion, that's all you need, but if you want fancy kinds of flight, forward and backward, and if you want to navigate in certain ways, then you do need something closer to wings. So there's this ever-present question, at least in the AI world, of how much biological detail you really need to build in. So how did you decide what to build in, and into which systems?
Srini 00:41:25 Yeah, that's a very good question, Paul. The way we considered this was to say, okay, it depends on the task you need to accomplish. You don't need to be necessarily slavishly biomimetic, in the sense that you copy everything that you see in the insect. For example, you don't need to build an insect compound eye with thousands of facets, and you don't need to have flapping wings. Our idea was basically to test whether the principles we extract from honeybee vision, flight guidance, and navigation really work: can you use optic flow information to do all the things that insects do? So what we did was put together a vision system to emulate the almost panoramic vision of the insect compound eye.
Srini 00:42:13 We did this by placing two wide-angle cameras back to back, one facing one side and one the other, so that gave you nearly panoramic vision except for a small blind zone at the back. That was the vision system we were using, and we used standard computer vision techniques to measure the image motion, the optic flow. We were not trying to do it exactly how the insect does it; by the way, we still don't know exactly how the insect measures true optic flow. <laugh> So we didn't want to wait for that. We said, okay, this time we put on an engineering hat and measure optic flow using the traditional, well-known techniques. And we used this to control the flight: to regulate the flight speed, to regulate the height above the ground, and to compute the distance travelled using visual odometry, like bees do. This was a fixed-wing aircraft, as you said; the propeller and nose cone were removed and mounted on top of the wings to make room for the vision system in the front, so you get a nice, clear view ahead.
Paul 00:43:17 How big is this thing?
Srini 00:43:19 I think the wingspan was about a meter and a half.
Paul 00:43:24 Oh, okay. Pretty big. Pretty good size.
Srini 00:43:25 Yeah, fairly big. We weren't trying to miniaturize at that stage; we were just trying a proof of principle, to see if these ideas actually work. And it seemed to work quite well. The other thing we put in, which we hadn't done in our own research but others had, is to use the panoramic horizon profile to monitor and control the attitude of the aircraft. So, for example, if you're flying and the horizon appears to move up in your left eye and down in your right eye, it means you've rolled or banked to the left. Whereas if the horizon appears to move up in front of you, it means you're pitching down,
Srini 00:44:11 and if it moves down, you're pitching up. Right? So you can use the horizon profile all around you to monitor and stabilize your attitude, assuming you're not flying in a canyon or somewhere where the horizon is not flat, <laugh> where the world is not flat and you've got things sticking up on one side or the other that could mislead you. But if you're flying high enough above the ground, the horizon is usually very reliable, and the nice thing about it is that it's a reference that does not drift with time, unlike many inertial sensors, where errors accumulate over time because they're integrating angular motions, and the noise tends to cause all kinds of drift problems and things like that.
Srini 00:44:53 So we found that this could really be used to stabilize the orientation of the aircraft, basically its roll and pitch. You can also use the horizon profile to do various aerobatic maneuvers. For example, all you need to do is say, okay, I want the shape of the horizon profile to vary from this shape to that shape as a function of time, and you can do beautiful things like loops and rolls and turns, all kinds of things, just by using the horizon profile. We don't know if insects do it that way; they do lots of aerobatic maneuvers too, and sometimes they do it even in the absence of a horizon, so they must be using other sensors as well.
Srini 00:45:30 But we found that with the horizon, at least, we could do all these things: not just go from A to B in a smooth flight, but also do these interesting aerobatics. And the nice thing was that these flights were completely autonomous, from takeoff to cruise, controlled turns, and landing back on the airstrip, all done without using any external information such as GPS or radar. So you're entirely self-sufficient and self-reliant, just the way an insect or a bird would be. And so it's a nice backup system for when there's a dropout of GPS or radio information and things like that.
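The horizon-based attitude monitor can be sketched with small-angle geometry, comparing horizon elevations seen in a few viewing directions. The sign conventions and the proportional leveling correction below are my own illustrative choices, not details of the actual aircraft:

```python
def attitude_from_horizon(elev_left, elev_right, elev_front, elev_back):
    """Estimate roll and pitch (radians, small angles) from the elevation of
    the horizon in four viewing directions. Horizon up on the left and down
    on the right means a bank to the left; horizon up ahead means pitching
    down. Sign conventions here are arbitrary illustrative choices."""
    roll_left = (elev_left - elev_right) / 2.0
    pitch_down = (elev_front - elev_back) / 2.0
    return roll_left, pitch_down

def level_commands(roll_left, pitch_down, gain=1.0):
    # proportional corrections that drive both angles back toward zero
    return -gain * roll_left, -gain * pitch_down

# banked left: horizon up 0.1 rad in the left view, down 0.1 rad on the right
roll, pitch = attitude_from_horizon(0.1, -0.1, 0.0, 0.0)
cmd_roll, cmd_pitch = level_commands(roll, pitch)
```

Because the horizon is an absolute reference, this estimate does not accumulate error over time the way integrated gyro signals do, which is the advantage Srini highlights.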
Paul 00:46:04 Well, I was going to ask how this compares. I don't even know how autopilot works. Current autopilot systems, right, they're using radar?
Srini 00:46:15 Yeah, for the most part they're relying on GPS. The pilot basically takes the plane off, and then they can sit back and relax, and the plane is guided by GPS all the way through, until it comes in to land. And even the landing, I have some pilot friends who tell me there's a landing beacon, a radar that helps you come down. The plane automatically stays within the beam of a signal that's being projected up from the ground, right? So it follows that beam down, and unless something goes wrong, I've been told, it's so precise that when it lands it lines up directly with the middle of the runway, so that the little cat's eyes in the middle of the runway hit against the nose wheel and it goes bump, bump, bump.
Srini 00:47:01 So the only pilot intervention is to actually steer the aircraft slightly away from the middle <laugh> to get rid of those bumping noises. So they're that precise, but they all rely on external information, you see. So if something goes wrong, this is where I think something like what we're doing could be helpful, because then, at least for some period of time, you can be self-sufficient. Long-range navigation becomes a problem, because odometric errors and so on will start to build up, and you can't really rely on that forever. But at least for the short term you can manage without any of these things that you're normally very crucially relying on. <laugh>
Paul 00:47:38 So, okay. So that was the, um, fixed wing aircraft, but you also worked on like smaller, miniature, uh, smaller flight systems as well. Right? So
Srini 00:47:47 Quad drones and things like that. Yeah. So that was, um, mainly, uh, to, uh, to see if the same thing would work. And, and when you had, uh, things, things like, um, you know, vertical takeoff and landing, uh, systems, uh, and, and, uh, doing things like it was, it was easier to do this on a smaller scale because we were, uh, we were, uh, flying in a, you know, didn’t have to go to an, the Astro every time to, uh, run these, uh, tests. We could do it right on campus. And we were also doing things like, um, um, developing algorithms for detecting other moving objects in the environment, which is actually quite interesting challenge, uh, because, um, uh, you know, uh, we are so good at even when we’re moving, uh, we can detect small objects are moving environment, right? Like another car or a bicycle or something like that.
Srini 00:48:34 And you cannot do it just by measuring motion, because if you're stationary, the image of the world is stationary, and if something moves within it you can pick it up. But when you're moving, the image of the whole world is moving, right? And within that, to pick out something that's moving, and decide whether it's really moving or just part of the world moving past you, is quite a challenge for computer vision scientists. Animals and humans are so good at it. So we were developing algorithms that would allow a moving observer to detect the movement of other moving objects.
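One common way to approximate this, which may or may not match what his group actually implemented, is to treat the dominant optic flow (here, the per-axis median) as the self-motion signal and flag flow vectors that disagree with it:

```python
def find_independent_movers(flow_vectors, threshold=0.5):
    """Very simple sketch of moving-object detection under ego-motion: take
    the median flow as the background (self-motion) estimate and return the
    indices of vectors that deviate from it by more than a threshold."""
    xs = sorted(v[0] for v in flow_vectors)
    ys = sorted(v[1] for v in flow_vectors)
    mid = len(flow_vectors) // 2
    bg = (xs[mid], ys[mid])          # robust background flow estimate
    movers = []
    for i, (dx, dy) in enumerate(flow_vectors):
        if abs(dx - bg[0]) + abs(dy - bg[1]) > threshold:
            movers.append(i)
    return movers

# background drifting left at 1 px/frame as the observer moves;
# one target (the last vector) moving right on its own
flow = [(-1.0, 0.0)] * 8 + [(2.0, 0.0)]
```

Real systems refine this with a parametric motion model and depth handling, since background flow varies with distance, but the outlier-versus-consensus idea is the same.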
Paul 00:49:11 Oh, that’s right. You teach people how to creep up on other people without being noticed. <laugh>,
Srini 00:49:16 That’s the other thing. There’s emotion camouflage as well. That’s the other thing, uh, that’s the other thing that we didn’t probably didn’t get into the insect will camouflage our own motion, uh, when they there’s a lot of steal there. We did a bit of studying modeling of that too. I could talk a little bit about that if you, if you would like,
Paul 00:49:33 Yeah. I don’t know if that’s the work with the dragon flies that, um, uh, okay. Yeah. Yeah. Can you just describe that? Cause that’s really neat stuff.
Srini 00:49:39 That’s something we that’s something we, that’s something you put into aircraft as well, so just see you can get it, get it to work. Yeah. So the idea is very simple. So, uh, a couple of ways in which you can do it is, uh, uh, one is, uh, move in such a way that you, you, you are maintaining a constant angle of bearing with respect to the, uh, see if you’re, if you’re there’s a shadower and a shadowy, right? So you’re the shadower <laugh>. So you move in such a way that the line joining you with, with, with the shadowy always say that the constant orientation, if that’s the, then it looks like you’re an object at infinity because you’re not moving, you’re not moving in the eye of the, uh, uh, shadow. And so it no dangerous perceived. Uh, the other way you could do it is to actually pivot about, uh, a fixed point behind you, as you’re, as you’re sort of tracking the shadow. That way the shadow think is just a state object over there. Right? So, uh, it can’t be possibly coming towards me. And so, as long as you don’t get too close in your really any expansion cues in your, in your image, uh, you, you can shadow in that way. It turns out that hover flies do that dragonflies do that. And yeah, so’s a lot of that sort of thing going on. Yeah. Emotional camouflage, just recall it.
Paul 00:50:49 So they, they’ve learned through evolution, this natural mathematical relationship between which camouflages their motion and that’s how they intersect intercept, uh, and, uh, consume other flying objects essentially. Right. Exactly.
Srini 00:51:05 Exactly. Exactly.
Paul 00:51:07 Yeah. I’m just, it’s picturing humans trying to do that is, seems like a lot of effort to, uh, Camou fly it.
Srini 00:51:13 It I’ve experienced it myself, not in terms of someone shadowing me, but shadowing me. But if, if you’re moving along and driving somewhere like this, and, uh, there’s another road coming in from the site and the car’s moving on along the road, maintain that constant angle of bearing you not to notice it until it’s, uh, quite close.
Paul 00:51:30 Oh, it’s interesting.
Srini 00:51:31 People have also done experiments with human subjects after we published that paper, and they found that humans can also be fooled by the same
Paul 00:51:38 Thing. Is that right? It seems like a fun thing to try, but I'm not sure that would be time well spent. <laugh>
Srini 00:51:48 It’s, it’s steal is a very interesting thing for the military as, as, as we all. Oh,
Paul 00:51:52 That’s true. Yeah. See, this is where the nefarious purposes come in. So we, we should move on. But so, so far we’ve talked all about like the navigational abilities and flight abilities, uh, of insects, but you’ve also studied, um, quote unquote higher level cognition in, in bees mm-hmm <affirmative>. And I, I don’t, you know, before we, maybe you can give some examples, you know, like, well, you know, I can just list some off, for example, that, uh, be bees have working memory up to like five seconds. They can do delayed match to sample tasks, which is like a standard, uh, kind of task they can count up to at least I think, what is it, four and anything there’s like 1, 2, 3, 4, and then everything above four is another category or something like that. Right. Mm-hmm <affirmative> I wanna ask you though, like what your, so maybe before you give, uh, another example or, or, or your favorite example, you know, how, how far we’ve come in terms of learning about what bees, uh, and insects are, can capable of cognitively, but also your own, how you, your mind has, um, I don’t know, wanna say, I don’t wanna say changed, but developed in terms of thinking about bees capabilities and, you know, have you come to respect bees more over the years through like learning their abilities and, or, or what, how, how, how has your own outlook on bees changed over the years?
Paul 00:53:12 I suppose.
Srini 00:53:13 Yeah, sure. Very good questions, Paul. So let me start quickly by summarizing what we found in terms of the bees' perceptual capacities, and then we can talk about the broader aspects. For a long time, bees and other insects were really considered to be rather simple, reflexive automatons that would learn just very simple associations. For example, the blue dish carries a food reward and the yellow dish no food, so you can train bees to choose the blue over the yellow very quickly. Four or five rewards is all it takes for the bee to learn these things, five minutes.
Paul 00:53:49 <laugh> well, they don’t have long. They have to learn
Srini 00:53:51 fast. Well, five rewards, but maybe half an hour, because they have to go back home and come back again. So, okay, five rewards and half an hour, and they've learned the color. But pattern recognition in bees has gone a lot beyond this over the last few decades. They're not just learning, for example, the shapes of flowers or objects in a photographic way, you know, memorizing the content of the image pixel by pixel. They're able to learn abstract properties of patterns and categorize them in more general ways as well. For example, they can learn the concept of orientation in a rather general way. You can train them to distinguish between, let's say, horizontal and vertical random gratings, and when I say random, I mean that from trial to trial the grating has a different pattern of black and white stripes; it's not the same regular stripe pattern.
Srini 00:54:49 So they learn to do that, and then you can test them on other oriented patterns, for example sinusoidal gratings, or even single stripes, which possess the same orientations, one vertical and the other horizontal. If they were rewarded on the horizontal random grating, they will pick the horizontal stripe, or the horizontal sinusoidal grating, or the horizontal row of dots, as opposed to the vertical. So they can generalize this concept of orientation, learn it in a fairly general way, and apply it to other objects they haven't even seen before. Another one of my favorite studies showed that bees also possess something you would call top-down processing. For example, you've probably seen the famous picture of a Dalmatian against a spotted black and white background, where the dog is camouflaged, right?
Srini 00:55:43 Because you cannot pick out the dog very easily. But what happens is that once you've been given a cue, once you've been shown a solid black silhouette of the dog, you never look at the same picture in the same way again, right? Because once you've seen it, that prior information in your head comes down and picks out the signal from the noise, because it knows what it's looking for. And it turns out that bees can also be trained to break camouflage in this way. So the way we went about this, what we did, was to show them camouflaged patterns. One, for example, could be a textured ring presented against a randomly textured background; both the ring and the background are randomly textured.
Srini 00:56:25 And in the other case, on the other side of the Y-maze, you have a randomly textured disc presented against a randomly textured background, with some distance between the foreground target and the background wall. I'll tell you why. Initially, when you try to make them distinguish between these two things, they can't learn it at all, because both patterns appear camouflaged. But then if you train them on uncamouflaged versions of the same objects, one a solid black ring and the other a solid black disc against a textured background, they can see these shapes very easily, and they will learn, for example, that the ring is the one to go for to get the food. And then you present these bees with the camouflaged objects, and immediately, once the bees have been pre-trained, they will pick the camouflaged ring right away, without even needing to be further trained on it.
Srini 00:57:22 So they've learned the trick of breaking the camouflage, and they're now using it. And not only that: you can now train them on totally novel camouflaged objects, and again they will learn to do that without needing to be pre-trained, because they've learned how to break the camouflage. And the way to break camouflage is this: when you're coming towards the object, you move a little from side to side, and because the object is slightly closer to you than the background, you get this motion parallax between the object and the background, and that makes the object pop out and reveal itself. So they've learned to use this motion parallax to break camouflage, to take advantage of it, or exploit it. So you can train a bee to look at the world in new ways, ways in which it hasn't looked before. Right. <laugh> Because normally they probably wouldn't need to do it.
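To make the motion parallax cue concrete: for a small sideways movement, the image of a point shifts by an angle roughly inversely proportional to its distance, so a nearer object slides across the background texture. Here is a minimal numerical sketch of that geometry; the step size and distances are invented for illustration, not figures from the experiments.

```python
# Minimal sketch of motion parallax: for a sideways translation dx,
# the angular image shift of a point at depth Z is roughly dx / Z
# (small-angle approximation). A nearer object shifts more than the
# background, so it "pops out" against an otherwise identical texture.

def image_shift(dx, depth):
    """Angular shift (radians) of a point at `depth` after a lateral move `dx`."""
    return dx / depth

dx = 0.02            # bee moves 2 cm sideways (hypothetical numbers)
ring_depth = 0.10    # camouflaged ring 10 cm away
wall_depth = 0.15    # textured background wall 15 cm away

ring_shift = image_shift(dx, ring_depth)
wall_shift = image_shift(dx, wall_depth)

# The relative shift between foreground and background is the parallax signal.
parallax = ring_shift - wall_shift
print(f"ring: {ring_shift:.3f} rad, wall: {wall_shift:.3f} rad, parallax: {parallax:.3f} rad")
```

With these numbers the ring shifts more than the wall behind it, and that differential shift is the cue that makes the camouflaged object reveal itself.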
Paul 00:58:09 Well, right. That was one of my questions. So that speaks to the capacity of what they're capable of, but maybe they wouldn't necessarily ever use that in their own ecological niche.
Srini 00:58:20 Yeah, that's the funny thing you see with lots of these things. That's what amazes me so much: even though it's not a requirement of their natural history, they can learn these things. They're a bit like a lab rat in the sense that, you know, you can make lab rats do things they'd normally never do, because you're trying to test something specific. They're like that, and that's what's really cool about them. And this is all done by a very small brain, with a small number of neurons.
Paul 00:58:46 That's still, I mean, one aspect is just that it's impressive that brains, that neural systems, have such high capacity. But is it just a matter of pattern matching, or is there some symbolic cognition going on there?
Srini 00:59:03 I think it's symbolic, because all of these things, for example the simple task of orientation, are not just photographic pattern matching. Right? It really wouldn't work if it was that; there has to be some generalization. Yeah. And again, when you train them to fly through mazes, for example: a simple way to guide them through a maze is to make them follow a symbol that's tacked onto each of the chambers of the maze, and they learn to follow the symbol. But you can do it in a slightly more abstract way, using the symbol as a guidepost. You can say, okay, if this wall is yellow in color, you've got to turn left; if it's blue in color, you've got to turn right. So they're using these things not just as guideposts; well, it's sort of a guidepost, but much more abstract. It's a symbolic kind of guidepost, right?
Paul 00:59:52 It's at least a kind of abstract symbolism. Yeah. I don't know how much we actually use symbols either, so there's that question as well. But yeah,
Srini 01:00:01 Yeah, yeah. I'm not saying bees are superhuman, don't get me wrong, but it's amazing what they can do. <laugh>
Paul 01:00:08 Okay. That was just a reality check there; I was just testing you. <laugh> But you have come to appreciate bee cognition a lot more, and, you know, respect the bee as an organism, probably through your studies. Yeah?
Srini 01:00:21 For sure, for sure. I mean, when you grow up as a kid, all you're trained to do is to avoid bees, because they'll sting you. Right? I mean, <laugh> but really, it's amazing: a bee is not aggressive in any way. The only time it stings you is when it perceives a threat, because it dies when it stings you. Right? It bleeds to death. So it's not in its interest to sting you. Anyway,
Paul 01:00:42 My daughter wanted me to ask you about bee stinging, but <laugh> I'm going to refrain.
Srini 01:00:46 Oh yeah. Especially when they're foraging, they're in a beautifully peaceful state of mind; all they want to do is come there, get their food, and go away. In fact, when they're feeding at your feeder, you can even reach over and stroke their backs with your finger, and they won't even notice; they just keep blissfully drinking their sugar solution. It's only when they perceive a threat to the hive or something that they get aggressive and come to defend the hive, and that's the only time they sting.
Paul 01:01:11 But what I've learned from you is that if I accidentally threaten a hive and I anger a few bees, what I should do is take my fingers and move them really fast so that they kind of go further away. Right?
Srini 01:01:23 That's a funny thing. That's one thing, but they also say that they get attracted by movement; that's for sure. We also looked at this in experiments. We haven't published them, but they do get attracted to moving objects. So one of the lessons beekeepers tell you is to not move, to freeze, when you perceive a bee about to attack you. You can actually hear the wing beat increase in frequency, so you can hear the raised pitch of the one that's coming to get at you. But it's instinctively very difficult to freeze at that time; you really want to get the hell out of there, you know? So it's a very hard thing to do, to just wait there and hope it'll go away. But if it does get into your hair or something, they say the best way to deal with that problem is not to rub your hair like this and try to get it out, because that gets them even more nervous, and they will definitely sting your scalp. The best way to deal with that situation is basically to whack your head and kill the bee, kill the poor bee. But that's the best way to do it, because it's going to die anyway; even if it stings you, it's going to die. Right?
Paul 01:02:18 So <laugh> right. Okay, well, I'll pass that on to my daughter. Thank you.
Srini 01:02:24 Oh, the other thing you should pass on to your daughter is that she can do all these experiments in her own backyard. She doesn't even need to have a beehive.
Paul 01:02:32 Right. That’s cool.
Srini 01:02:33 She can just place a sugar-water feeder in the backyard, and I can send her feeder designs if you want to set them up. A neighbor could have a beehive, and she could just mark some of the bees; I can send some information on how to mark them carefully and so on. And she can have a lot of fun training them to learn colors and this and that, totally in the backyard. Oh, that's
Paul 01:02:57 Awesome.
Srini 01:02:57 No, no, at no cost.
Paul 01:03:00 You gave me a month's worth of science lessons. I'm in charge of the homeschooling, the science part; I do dad science, quote unquote. So this would be a really fun project. This is
Srini 01:03:09 Great. That’d
Paul 01:03:09 Be great. Okay. Um, <laugh> do you think it’s silly for, uh, neuroscientists to be studying higher level, higher level? Like, well, bigger brain animals, like monkeys, et cetera, when there’s still so much to learn from such small, you could say more tractable. Uh,
Srini 01:03:28 I think it should go in parallel, you know. I think it should go in parallel. I mean, the sad thing now is that some of these smaller creatures, because of funding constraints and so on, are being pushed to one side, and it's harder and harder to find research funding for looking at these smaller creatures. And sometimes you may discover things, as I say, some of the neural basis for some of these cognitive abilities, because the bee brain is so small. It's just about a milligram, and it has only about a million neurons. And you compare that to a human brain, which is over a kilogram, I think, in weight, with about a hundred billion neurons, including the glial cells. It's a lot of neurons.
Paul 01:04:06 Right around there. And there’s no Neo cortex in a B either. Right? Yeah.
Srini 01:04:10 Yeah. For sure. For sure. Yeah. So the thing is, if you happen to, uh, you know, uh, hit upon something when you’re recording for somebody in neurons, uh, it may give us an insight about what’s happening across all a number of species, right, without being distracted by all of these other things that higher creatures have, which that’s the thing you see, you have the bare bones kind of stripped down version of, uh, several cognitive sort of capacities that are day and these simpler creatures, which you might be able to unearth. Uh, if you’re lucky, uh, in some of these simpler creatures, I feel <affirmative>,
Paul 01:04:42 Uh, we started off this conversation. You talked about how bees are these great learning machines, but of, but a lot of their behaviors are, are innate, right? They’re pre essentially prewired to perform a lot of these behaviors and humans, of course, come in. We have a lot of innate, uh, structure and abilities, but we, we are highly dependent on learning, you know, with a long just dation period and long childhoods and all that. Do you think that some of the innate abilities that are still, you know, some higher cognitive abilities like numerosity and some of the, um, symbolic, um, capacities that bees have, do you think any of that actually gets masked or, well, I guess I’ll just say masked in brains that learn, and then I don’t, I don’t remember what the word you just said, but kind of things get kind of messy or covered up or something like that in these
Srini 01:05:30 Yeah. Kinda obscured by, by other complicated things that are happening. Yeah. I suppose. Yeah. I mean, it, I’m not saying, uh, bees can do everything. I mean, they can do certain things like count up to four and a value work in Adrian di lab more recently, which is shown that be, can even add and subtract in simple ways. And they’ve even developed a concept of zero. You can train them to, to develop a concept of zero, nothing that, you know. Yeah. And so all these things are there. I’m not, they, they, they will driv humans by, by any means, but, and as you’re right, uh, the evolution has, pre-programmed a lot of the neural structure in them. And so for example, you can train a B to learn colors and, and, and I said, half an hour, right. With very few trainings, samples five. Whereas I think if I’m correct me, if I’m wrong, but monkeys take a longer time to learn simple task.
Paul 01:06:18 Like, oh my God, I don’t wanna talk about that. Yes. It takes forever
Srini 01:06:20 Because, because the explanation I’ve heard is that monkeys are saying, wait, this cannot be that simple.
Paul 01:06:25 Right. There’s
Srini 01:06:26 Gotta be something more complicated. And so they’re trying to figure out the really complicated solution or the reason for why they’re telling why the experiment is trying to make them do this silly thing.
Paul 01:06:34 Yeah. You’re, you’re, you’re raising my blood pressure right now, my through memories here. So
Paul 01:06:41 Yeah, Sereni my, I, I mentioned my daughter. Um, I haven’t talked about my son yet. Uh, I, my son, I recently had to have a conversation with him because I found out we, we have in Durango, I live in Colorado in, uh, the United States and where we live, there’s this infestation, everyone has them in their house houses. They’re called Elm bugs. Um, and they’re kind of harmless, but they’re just a nuisance. And I found out, uh, one day my son was, um, plucking the legs off of the Elm bugs, right. While they’re alive. And so we had to have a conversation about how that’s not cool, not an okay thing to do to other organisms because they might feel pain and they might be suffering and how that would be a problem. Uh, what are your thoughts about pain? You know, the perception of pain, um, in insects and other like unquote lower animals.
Srini 01:07:30 Yeah. Yeah. So it, it, it, it’s a very interesting question that you bring up all, and it’s a controversial question, difficult one to, you know, do be experience pain, for example, or, or any other insects or invertebrates, uh, you know, if a dog Yelps, when it gets stung by a wasp or something, we are sure it’s felt pain. Right,
Paul 01:07:47 Right.
Srini 01:07:47 But if an insect finches, when you product it with a pin, we conclude that this reaction is simply a reflex because Hey, Inver bit cannot possibly fail, feel pain. Can they, you know, for some reason we seem to live ability to sense pain to its perceived intelligence, you know? And I always wonder why, why do we make this default assumption? Why should the two be linked? I mean, it seems to me that the experience of discomfort caused by anoxia stimulus, uh, does not require any particular level of consciousness. I feel
Paul 01:08:18 It doesn’t require consciousness. So wait so sorry.
Srini 01:08:21 Exactly. Exactly. It doesn’t live quite any high level. Something is unpleasant. You, you, I mean, you could again say it’s a reflex, right? You could say, okay, people trained flies to avoid the heat to he chambers and you move the cool quality chambers probably know that they’re avoiding the heat. And you could say some say, okay, that’s simply a reflex, but even if it’s a reflex, it could, there’s nothing to say. The cliche does not feel discomfort. Right. <laugh>
Paul 01:08:44 But doesn’t that imply some subjective experience. Discomfort is a subjective conscious
Srini 01:08:50 Depends on how you define how you define discomfort. Right. I mean, yeah. I found the feel if, if I, if I know that someone’s going inflict pain on me and I can predict it, I can see that’s a conscious tends to be a conscious experience, but, you know, if, if, if, if something happens, I get stung by something, uh, uh, without knowing what it is, uh, I still feel the pain. Right. Yeah. Uh, and, and, and that’s where I think, you know, um, we should be a little more careful. I think I default the, some should be that all creatures feel pain and then unless proven otherwise <laugh> is my, is my feeling, you know?
Paul 01:09:22 Yeah. Well, I know, I think that, that, I don’t know. I don’t know why when we’re young, like my son, like, it’s not a, you, you really have no, <laugh> no empathy for other organisms essentially. And now I feel bad cutting a branch off of a tree or something.
Srini 01:09:36 Yeah. It, it extends to everything. You’re right. I mean, it could be, plants are also sent in creatures say, and, uh, where do you stop? Right. I mean, that, that’s always the problem, my ugly completely. And by the way, with, with fish, for example, for a long time, you know, um, there were no guidelines for working with, uh, you know, um, fish and other cold blooded EB bits. And, uh, totally, I think, uh, maybe a ago that, uh, there was a paper in nature and typically it was a very, uh, we think about it was a fairly simple, uh, almost simple mind, so simple minded. I’m surprised that I published, published in nature, but anyway, so they, they took a, they, they <laugh>, they took a B and then made it sting thing of fish in the table. Oh God. And, and then the fish just switched its tail.
Srini 01:10:19 And they finally, oh my God, this feels pain. Oh. You know, and, and, and it was published. And then, you know, from then on, it changed the it’s just a matter of when the world is ready to accept that thing. I think because of course the, the, uh, the fishermen and the anglers were very upset by that. Yes, of course. Because they, they always like to believe that there’s no pain. Yep. But, um, once, you know, the world is ready to accept it, I think it gets accepted much more, uh, quickly. And in fact, what we were trying to do, uh, was to try to see whether, you know, bees feel pain or not was something, uh, I thought was far more sophisticated. And it was wasn’t our own idea. It was borrowed from someone in England who did some work to investigate, um, pain in, uh, in chickens and what they was.
Srini 01:10:58 They, they, they jabbed, uh, wounded one of the legs of the chicken, a bunch of chickens. Uh, and then the other control group was, uh, you know, UN unwanted. And then they, uh, gave, uh, both these, uh, groups of chickens, a choice between two feeders, one was a normal, the normal food. And the other one was the same food pellets laced with some, uh, painkiller, like ibuprofen mm-hmm <affirmative>, you know, and then it turned out, it was only the wounded chicks only there. So the preference for the food that had the painkiller, huh. Right. And it wasn’t like the, uh, the, the painkiller tasted good and they preferred it that way because the UN wounded things did not show a preference that way they’re going randomly to both of them, 50 50. So this is more than just a simple, reflective reaction to a, you know, a jab or something.
Srini 01:11:44 It’s really something I said, Hey, look, I find that this thing makes me feel more comfortable. It really, my discomfort, right. When I eat this and I’m going to eat this, so it’s a much more subtle way of investigating it. And, and you know, this is what we tried to do with bees, but unfortunately, um, the results were there a slight difference. The, the, we used a pain killer, which is morphine. We didn’t know what to use, but one thing, we tried to use several things. Uh, but they showed the, the wounded bees showed a slight preference of the morphine, but it wasn’t statistically significant enough as to make, make a, make a claim about it. Mm. So we published a paper saying we really don’t know, there’s no statistical differences, but here, here, here it is. And that’s all we can say for the moment. It’s possible that we either had the wrong, wrong anesthetic. We do really don’t know what works well for bees. We still don’t know we had to try several things. So, uh, it is still an open question. I feel, you know, um, yeah. What for, I say, yeah, just more workers needed. Just, I think few more workers needed. I don’t think it’s, it’s just the beginning of a long story. I feel
Paul 01:12:40 That, that wasn’t a nature paper. You have to have a bee sting of fish to have a, a nature paper. So it’s, I,
Srini 01:12:47 If you think about it, that was so simple mind. I
Paul 01:12:49 Know. That’s great. It’s great.
Paul 01:12:53 But yeah, like when you, you were talking about fish, I mean, I remember I, I grew up, you know, being taught, oh, you know, I would go out fishing with my grandparents. Well, fish don’t feel pain and my grandfather would clean. ’em alive, you know, would just filet them alive and, oh, it’s fine. They don’t feel pain. And, but on, on the other hand, and, and we just mentioned, you know, the, the chickens and, and bees and different examples. On the other hand, there is, like you said, it’s a tricky question because the experience that they are experiencing, the sub the subjective experience, there, there are likely different, um, gradient of consciousness and uncomfortableness. And I, what is your conception of be consciousness, if you, if you had to guess, is there a rich there, or is there a little flicker of consciousness or, or what,
Srini 01:13:39 I, I don’t think it’s as rich as humans, but, you know, let me give you just one example of some lovely work that, um, chap the name of James ne uh, did, uh, and he, he found that, um, have you heard of headbutting, um, uh, bees headbutting other, uh, bees while they’re dancing? So, uh, what he found was that, yeah, I did. Very interesting. So he found, this is about, about, this is almost about 10 years ago. He found this, um, that, uh, it would be, uh, notices that another bee is, uh, dancing to signal a particular food source that this particular bee has been to and encountered some danger. For example, in the form of a lurking spider, that’s kind, kind of attacked and wounded it, then this B observing B will headbutt this dancing B and stop it from advertising, that particular food source and only a B that advertising that particular food source will be headbutted.
Srini 01:14:30 And from dancing, I mean, to say, all of this is just a pure instinctive reflex seems, uh, really a bit difficult to believe, right? Mm-hmm <affirmative>, and also the, the, the probability of headbutting increases with the severity of the injury that this BS experience after coming back. Hmm. So if it thinks it’s really dangerous, it’s more likely to do the headbutt. So he did some control, you know, uh, lovely Saudi. We control the sort of ES of the legs of these bees and release them back home, and then look to see how they behaved in terms of the headbut. And there was a nice relationship between, between the two sort of severity of the injury and the tendency to headbutt. I mean, if all this can happen, it is hard to believe all this, just a simple pre-programmed roof reflect. And it’s just that particular food source. If it’s, if be signaling that particular distance and direction, that signal, that particular food source says, Hey, it has the same order as well. It says, Hey, don’t do that because you’re putting the whole colon in danger. Yeah.
Paul 01:15:25 Often I have guests who are using deep learning, uh, networks as models of brain activity and brain function. And I know we, we haven’t talked about deep learning at all. And, and you, you don’t use deep learning in your, in your navigation systems, right? You haven’t have you dabbled in that? No,
Srini 01:15:40 No, no, no, no. Uh, not, not yet. Anyway. No, no, no.
Paul 01:15:44 Any interest do, do, do deep nets feel pain and are they conscious? <laugh> just, But, you know, if
Srini 01:15:51 They, if they, well, if they rip, that’s the thing, isn’t it, the philosophical question, if they replicate all the behavior that, uh, you know, a little bit cliche would, would exhibit, if it, if it was subject to some ocular stimulus, then I suppose you could say the field pain. I mean, that’s the thing is how can you, yeah, I dunno. It’s hard to decide one way or the other isn’t. It really is. Yeah. But I’m not an expert in deep learning, but I can’t have wondering whether, you know, learning in bees and other simple creatures involves at least in some, some, some process that might be simpler and faster. It’s true there, as you said, uh, a lot of what would be, uh, has learned or can learn is partly due to, um, evolution, which is sort of fine tuned. And hon these neural circuits, that’s kind of a deep learning process.
Srini 01:16:32 That’s gone through, uh, you know, several, um, several thousands of years of evolution. Um, but still, you know, as we said, a B can learn things very quickly, what it needs to learn. And even some things that it doesn’t need to learn novel things very quickly. It doesn’t need millions of training samples, right? I mean, uh, unlike most, uh, you know, deep convolutional, uh, networks, it, it, it, you know, orders are learned with two or three rewards colors in half an hour patterns in half a day, navigational roots, you know, one or two flights. And the bees learnt the root, uh, you know, for, for its entire life. And they can learn new roots very quickly, all this happening, very rapidly, uh, bees, uh, people show that bees can recognize human faces, uh, Adrian DI’s work. Uh, one of my colleagues showed that bees can be trained to distinguish between, uh, paintings by Monet and paintings by Picasso.
Paul 01:17:24 Again, probably not part of their ecological needs. Yeah,
Srini 01:17:27 That’s the thing you see, it’s all novel things. Uh, and someone large, large show that beast can be trained to play golf. This is manipulating a, a, a, a ball to a hole, and then they get the ball to the hole. Uh, they get a reward, uh, all these things happening and they learning it very quickly. See, so,
Paul 01:17:43 I mean, but the deep, deep learning is, is that’s, that’s OB one of the obvious, um, one of the first things people talk about of OFS shortcomings is that it just takes so long to learn and, and humans as well have one shot and few shot learning, but also a lot of what you talked about just in the, um, the optic flow, like, is it basically a simple control system, right. Which is like more of a cybernetics engineering standard kind of control system, but those aren’t necessarily learned, I suppose
Srini 01:18:09 They, they’re probably, uh, they’re probably hard sort of hard wided I would say, yeah. Some of the basic control systems, the flight control mechanism and so on are probably, uh, hard wide E even the B dance from what I understand is, um, it’s really, it’s the basic ingredients are there, like, basically like the gate is, is programmed into, into human infant. It’s probably part of the P programming thing, just walking, you know, the gate. Uh, but, but it’s fine tuned, uh, after, after, after birth, but the fine tuning happens very, very, very quickly. I, I think I I’m don’t get me wrong. I’m totally in awe of the performance and machine learning algorithms and they, this is important, right?
Paul 01:18:46 The, the, everyone has to say this first, when they’re about to criticize it or say something slightly, believe me. It’s very impressive, but so go, okay, now go ahead. Yep. Yep. Standard.
Srini 01:18:57 Well, the, the thing is, yeah. What, the one thing I find a bit unsatisfying about, uh, the learning we seen approach deep learning is that while they work beautifully at the task that they’re supposed to accomplish, we do not. It seems to me, and again, I’m ignorant. I really don’t know of the subject. So I, maybe I shouldn’t be talking about this, uh, you know, with any authority, but we don’t seem to have a good idea idea of why they work so well. Right. For example, what is the nature of the computation that’s being performed? What are the kinds of information that are being extracted by the neurons in the various layers? We’re still largely in the dark, it’s like a black box here. You feed in millions of examples through a network and you turn rank and outcomes, a beautiful result, but the network is still largely a black box. And I find this little unsatisfying as a scientist because I feel the goal is the full goal is not achieved. And I remember that in the, in the late eighties, when neural first became popular, uh, their study was considered to be a, be by many, to be a, a non-science.
Paul 01:19:51 Uh, what did you think of them? Do you remember what you thought of them back then and, and compared to what you think of them now,
Srini 01:19:57 I always had this impression about, uh, about, uh, you know, it not being, uh, not being totally satisfying. I’ve always had that. I think I’ve come to appreciate it more as a utility, as a tool, you know, I think, you know, self-driving cars and all this sort of thing. I it’s fantastic. So it’s great for, uh, engineering applications. It just great. We still don’t know how, how it works and that that’s the problem. So, uh, in the old, early, in the old days, uh, I remember stories where people were saying, uh, you know, you’re a young faculty member, um, applying to work on your own networks. They would say, don’t do it because of danger, poor possibility. If you won’t get tenure, because you don’t consider it to be a science. Right. Right. I dunno if you’ve come across that. Uh, uh, but, but, so it’s kind of a, a sledgehammer approach where you blindly turn this crank without really understanding what is doing. That’s what I find is missing, maybe come, or maybe I, I don’t know, maybe it’s already been understood and I don’t know.
Paul 01:20:50 No, no. I mean, there is a lot of deep learning theory, but, but there is a lot ESP in neuroscience, a lot of comparing the activities in some way, whether you’re looking at whole population activities, or sometimes even individual unit activities with networks in the brain and finding matches between those, I mean, the, the jury’s still out on the explanatory power of that and like how much that really buys us in terms of understanding, but they’re, they’re not com you know, they’re not black boxes in that sense.
Srini 01:21:18 Yeah. If, if you do find, uh, units in, in the artificial network that, uh, look very much like, uh, show responses, they’re very similar to what the cortical neurons are doing that great. That, that, that, that, that, that, that’s fantastic. That, that at least gives us an explanation of why, why this circuit is designed that way and, and what it’s doing more importantly. Right. Mm-hmm
Paul 01:21:37 <affirmative> you said you, you called yourself kind of an armchair advisor at this point, but do you see within, you know, the work that you have done over the years, do you see when people are carrying on that, that work and advancing, it are little excited about using the deep learning approach, or is it really more of a, a control system? The
Srini 01:21:54 Duty is getting very excited. I mean, the, some of the students I’m trying to help and advise now, uh, uh, are, are really getting onto, it seems to be the, uh, yeah, the flavor of the month. Right. Everyone wants to use it and they want, and I’m not, I’m not gonna try and stop them. I’m just gonna say, Hey, okay. In the end, it all makes sense to you. Can you please open the black box and see what it’s doing? Uh, to me, it’s not, satisfac you just to say, okay, I’ve got this, uh, network that, you know, that replicates the trajectory of a, B it fly through a course of obstacles. Uh, I would like to know a little more about how it’s doing it and why it’s doing it.
Paul 01:22:24 <laugh>, Sereni your best guess? How long is that flavor of the month gonna last?
Srini 01:22:28 Oh, you never know, are you, it it’s kind had a revival, isn’t it? It was there in the eighties for a while, eighties, early nineties, and then it disappeared. And it came back again with a, with, with a punch, probably because the increases in computing capacity. So now you can put in, you know, a thousand layers and, uh, uh, quarter a million neurons in each layer and yeah, yeah. Click the switch and off it goes. Um, but I also, I don’t know what the next step is. If, if that goes outta fashion, what is the next step? Uh, I don’t know. Is it back to basics? <laugh> that’s
Paul 01:23:01 Yeah, well, it's ever forward. But I'll end on this question: how long until we have a fully, whatever this means, a fully satisfying account of bee cognition, from neurons to behavior, from the nervous system to behavior?
Srini 01:23:20 Well, probably when we are able to, I would say, record from each of the individual relevant neurons. I mean, some people are getting to it. Janelia Farm, as you probably know, the Howard Hughes Medical Institute's Janelia campus, they've sort of got a blueprint, the goal is to get the blueprint for the entire nervous system of the fruit fly, for example. I don't know if they're planning to record from each one of those neurons. The one thing about genetics, molecular genetics, is that it allows you to dissect the system and delete certain parts of the system and get an idea of which circuits do what. But how they do it, I think, again requires electrophysiology, to really find out how each neuron responds, to find out what computation it is doing.
Srini 01:24:10 And that's where I think molecular biology sometimes falls short, I feel. There's a lot of interest in, and funding for, molecular biology, which is great, it's a nice tool, but I feel electrophysiology should also really be funded more enthusiastically. Behavior and electrophysiology, I think, are ultimately what are going to tell you what the insect is really doing. For example, electrophysiology might tell you that a neuron's response is something interesting, but if the animal does not use it behaviorally, then again, behavior is the ultimate test, isn't it? <laugh>
Paul 01:24:45 Yeah. This has been a lot of fun for me. I really appreciate the conversation, and <laugh> you've enlightened my world about bee cognition. So I appreciate all the work in bees that you've done. Thanks for being here.
Srini 01:24:58 Thank you so much, Paul. Really nice to talk with you. Thank you again.
0:00 – Intro
3:34 – Background
8:20 – Bee experiments
14:30 – Bee flight and navigation
28:05 – Landing
33:06 – Umwelt and perception
37:26 – Bee-inspired aerial robotics
49:10 – Motion camouflage
51:52 – Cognition in bees
1:03:10 – Small vs. big brains
1:06:42 – Pain in bees
1:12:50 – Subjective experience
1:15:25 – Deep learning
1:23:00 – Path forward