Check out my free video series about what’s missing in AI and Neuroscience
Support the show to get full episodes, full archive, and join the Discord community.
Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control – positive and negative – as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder’s lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how “If you wish to contribute original work, be prepared to face loneliness,” among other topics.
- Rodolphe’s website.
- Related papers
- Related episodes:
0:00 – Intro
4:38 – Control engineer
9:52 – Control vs. dynamical systems
13:34 – Building vs. understanding
17:38 – Mixed feedback signals
26:00 – Robustness
28:28 – Eve Marder
32:00 – Loneliness
37:35 – Across levels
44:04 – Neuromorphics and neuromodulation
52:15 – Barrier to adopting neuromorphics
54:40 – Deep learning influence
58:04 – Beyond energy efficiency
1:02:02 – Deep learning for neuro
1:14:15 – Role of philosophy
1:16:43 – Doing it right
Transcript
Rodolphe 00:00:03 Maybe it was the first time where I got such a sense that, um, science is historical and this is something that is very close to my heart. Mm-hmm <affirmative> um, and, um, yeah, you know, it’s really a human adventure. My view is that, you know, the digital age is just a moment, another moment in history. And very soon we are gonna go back to something which is much more mixed, but I think we see that happening already. At the end of the day, the only thing that makes it possible for you to work in isolation is to believe in what you’re working on.
Speaker 3 00:00:50 This is brain inspired.
Paul 00:01:03 Hello everyone. I’m Paul. On today’s episode, I’m joined by Rodolphe Sepulchre. So Rodolphe is a control engineer and theorist at the University of Cambridge. And today we discuss a range of topics around his work using control theory, specifically using a mix of positive and negative feedback control, to model and build circuits that mimic the mixed digital and analog signals in our brains. So neurons, uh, they’re special because they spike. They emit action potentials, right? They’re digital. Uh, much of neuroscience is devoted to counting spikes, doing statistics on spikes, and correlating spikes with behaviors. Uh, I counted spikes during most of my career. Spikes inspired the McCulloch-Pitts artificial neurons, which would output a one or a zero, binary, digital, which led to, well, you know, deep learning. But spiking isn’t all neurons do. Their membranes have continuous analog voltage signals that are usually considered only in relation to whether a neuron will or won’t spike.
Paul 00:02:09 But that membrane voltage is sensitive to things like neuromodulators, ion concentrations and so on, and to Rodolphe the mixed digital and analog signals of neurons, uh, remind him of good old fashioned control theory, and specifically a mixed feedback kind of control: mixed positive and negative feedback control. So he has been modeling single neurons as mixed feedback control systems and began building up to small circuits of neurons that are further regulated by neuromodulation. In fact, much of his work has focused on modeling a smallish circuit of neurons in lobsters and crabs called the stomatogastric ganglion, uh, or STG. So you may remember Eve Marder from episode 130, who’s been studying the STG for many years and famously showed it can function within a normal range despite wide variations in environmental conditions; despite individual neurons behaving quite differently, the system as a whole remains steady.
Paul 00:03:13 It turns out Rodolphe’s, uh, mixed feedback control approach may be a great way to understand how the STG does this. Anyway, we talk about all that. We discuss Rodolphe’s recent plunge into building these principles into neuromorphic chips, which he sees as the future, and kind of the present. We discuss a little control theory history. Uh, it turns out that mixed feedback control has always been there, even though since the early days of cybernetics we tended to focus on negative feedback control. We talk about how, quote, if you wish to contribute original work, be prepared to face loneliness, and a handful of other topics as well. If you like this topic you may want to visit or revisit, uh, episode 119 with Henry Yin and his views on what cybernetics got wrong and his perspective on control theory and brains, and/or episode 130 with Eve Marder, whom I mentioned before. If you wanna dive deeper into Rodolphe’s work, go to the show notes at braininspired.co/podcast/143. If you want to express support for the Brain Inspired podcast, go to braininspired.co and check out Patreon or my online course all about neuro AI. Uh, there’s a free, short video series that you can sign up for. I hope you’re all doing well. Staying in control, sometimes getting a little outta control, perhaps. All right, here’s Rodolphe.
Paul 00:04:40 Are you, do you consider yourself a control engineer? If I asked you what you do, would you say you’re a control engineer?
Rodolphe 00:04:48 I would be flattered because, uh, many people think that I’m, I’m sort of a mathematician, but, uh, I think that control is really an engineering science. Yeah. It’s about building things.
Paul 00:05:01 Right. Okay. So that leads directly to my next question. Well, maybe first, then, I would ask what a control engineer is. How would you describe what a control engineer is?
Rodolphe 00:05:11 Right. That’s a good question. I often joke that, um, control is to engineering what philosophy is to humans. So many people, I mean, on the surface, you might think it’s pretty useless, but at the end of the day <laugh>, um, it is a foundation of any engineering development, I would say.
Paul 00:05:33 Why would you think that? I know it’s a joke, but why would you think that it’s useless? Does control... I know that the history of control has ups and downs, right, within the engineering field, but it seems fairly central.
Rodolphe 00:05:45 It depends on, on, on the place. Um, but certainly, you know, if you speak to control theorists nowadays, many of them, I think, have a little bit of, uh, difficulty placing themselves with respect to thriving fields like machine learning, um, optimization. But I think that control is really great and very important, especially nowadays. Um, and perhaps the reason why some people might think it’s useless is because all the central concepts of control, like for instance feedback, are almost daily-life concepts. And so you wonder whether you really need a theory for that. And then when you start doing the theory, it all sounds very mathematical, and it looks like there is a huge gap between, you know, the control problems that you start introducing in a basic course and then the mathematical theory that goes to address those questions. But at the end of the day, I think it’s one of the few courses in an engineering, uh, curriculum where you go the whole way: I mean, I would say you see the value of abstraction, but to address very concrete questions.
Rodolphe 00:07:11 I think that’s perhaps what I like about control.
Paul 00:07:14 Well, you mentioned machine learning. Is there no room for control theorists in machine learning at the moment?
Rodolphe 00:07:20 Of course, of course, but there is a temptation to become a machine learner, you know, and I think the challenge is to, oh, wow. Yeah. Um, bring your expertise, as opposed to trying to, you know, become someone different <laugh>. Um, at the end of the day, I think we need all expertise, and all expertises are valuable, but there’s always a balance between being sort of faithful to your expertise, which I think is where you can make a difference, because this is your competitive advantage at the end of the day. Um, and of course you want to use your expertise, but you also want to address questions. And so that’s...
Paul 00:08:13 Is neurophysiology reaching out for you?
Rodolphe 00:08:20 Um, <laugh> where to start <laugh> but, um,
Paul 00:08:25 Well let me ask you this before, before we go on, because do I have it right? I think I read in one of your editorials that it was, was it James Gleick’s, uh, book that drew you into control in the first place? Can you just tell me that background story?
Rodolphe 00:08:43 So do you know about this book? So this was a book about chaos and celestial mechanics and, and, I mean, the whole history of science, starting with Newton and then finishing with Poincaré and dynamics in the 20th century. I found this absolutely fascinating. Maybe it was the first time where I got such a sense that, um, science is historical, and this is something that is very close to my heart mm-hmm <affirmative>. Um, and, um, yeah, you know, it’s really a human adventure, across centuries sometimes, and it’s fascinating to see that at the same time it progressed very slowly, but at the same time it’s very connected. And, and, um, and I think that chaos was sort of the deep learning of its day; it was the hype in the eighties.
Rodolphe 00:09:33 It was a very, um, sort of novel way of linking the deterministic and the stochastic. And so it raised many hopes and many expectations. Um, so yeah, that was all very attractive to me. Um, I’ve sort of moved away from, um, dynamical systems because I’ve realized over the years that dynamical systems was very often an obstacle to thinking about questions of neuroscience. And in my first steps in neuroscience, I was always told that, you know, the only way for an engineer to approach neuroscience was neurodynamics. And, uh, I think that many people still nowadays in neuroscience have this view of the brain as a dynamical system. And yeah, I think that this causes a lot of difficulties; at least for me it has. The journey has really been from dynamical systems to open systems, systems that interact with their environments. And this is why I think that the brain is so much about control.
Paul 00:10:47 It’s interesting that you say that. Maybe we should pause here for a moment and think about that more, because right now, at least from my perspective, there is, uh, a resurgence of dynamical systems theory, at least, uh, at the large-population-of-neurons level. Maybe not so much at the single-neuron, Hodgkin-Huxley, you know, modeling-a-single-neuron level, but thinking of, um, the activity of populations of neurons and how to tie those to cognitive processes. Dynamical systems theory seems to lend itself fairly well to these, you know, to take high-dimensional activity, reduce the dimension of that activity, and see what kind of landscapes and attractors, uh, you can glean from that activity. Is that the kind of dynamical systems theory approach that you have come to think is not as valuable?
Rodolphe 00:11:40 Yes, but of course don’t get me wrong. I mean, I think it’s of course, um, very good that people wish to acknowledge the dynamical nature of the brain and our <inaudible> behaviors. So there is no question that, you know, activity in the brain is very dynamical, and so when we want to model the brain, we need dynamical models. However, there is a big and underappreciated distinction between closed and open systems. So I think we are still very much influenced nowadays by, you know, Newton’s description of celestial mechanics. And so this is what we call a closed system: there is no interaction with the environment, it’s all planets moving like clocks. And I think that this view is very difficult to transpose to the brain, because I think of the brain as really a machine whose main role is to interact with its environment.
Rodolphe 00:12:43 And once you have a system that has inputs and outputs, um, I think that the closed-system view has strong limitations, and it’s very important to acknowledge the open nature of those dynamical systems. And this is very much the field of control. So this is more or less moving from science to engineering as well. And, um, yeah, when it comes to the methodology, you know, we have to acknowledge that the methodology of dynamical systems is at least four centuries old. So we have a lot of tools to talk about dynamical systems. Um, control theory is much more recent, and so we have many fewer tools to talk about open systems and interconnections of open systems. Yeah.
Paul 00:13:35 Maybe we should just dive into the mixed feedback, uh, signals that you’ve been working on, but maybe before that, even, uh, just to really zoom out, how would you describe the <laugh> grand vision, or what is the goal or project that you see yourself attacking?
Rodolphe 00:13:56 The short answer would be building a brain <laugh>, um, which is of course slightly provocative. But what I think is important there is building versus understanding. I see a lot of people in neuroscience considering that the grand question is to understand the brain and to understand how the brain works. I think I moved away from that question, perhaps it’s too hard for me <laugh>. And perhaps this is my sort of engineering, uh, background that is now speaking, but I find that when you want to build a machine that, you know, approaches the behavior of the brain, it’s sort of very concrete, and it’s also perhaps sort of bottom-up. Because, you know, um, it’s gonna be a long way before we build a brain, but I think we understand what I mean when I say I want to build the brain: I want to build a machine that is closer to the brain than the machines that we currently have.
Paul 00:15:00 Why?
Rodolphe 00:15:02 Well, um, again, that’s very historical, because if you go back to the early days of control, which we call cybernetics, um, which I think was a great time that is in a sense very similar to the current time. So we are talking about, uh, the late forties, right after the Second World War, and this is a time when there was a lot of enthusiasm to think about machines and animals in the same language, and to try to really think of the brain as a machine in the first place, and then, as a second step, sort of try to imitate that machine.
Rodolphe 00:15:45 But in fact, this moment in history was very short-lived, because I think it was, uh, stopped quite abruptly by Shannon’s theory, which is 1948. So really we are talking about only a few years, and Shannon, I think, was very disruptive in saying we need an information theory and that theory needs to be discrete. And then a few years later we had the digital computer, and from that time on we have seen, um, an increasing split between what we call analog technology and digital technology. And perhaps today we are sort of at the climax of that split, in the sense that, you know, uh, students enter engineering thinking that analog is old and obsolete and useless, and that digital is important and is cool and is the future. And my view is that, you know, the digital age is just a moment, another moment in history.
Rodolphe 00:16:56 And that’s very soon, we are gonna go back to something which is much more mixed, but I think we see that happening already. Um, and I, I’m quite fascinated by this sort of return to, um, the mixture of the analog and the digital, because I think I read it as a return to, to cybernetics age, which was sort of ahead of its time. But I think now we can really make another try because we have 60 years of developments, um, in neurosciences. So we understand the brain much better than in 1948. And also of course we have, um, 60 years of developments of digital technology, enough computation and technology.
Paul 00:17:39 How did you come to the, so did you look at a neural signal? I, I was gonna make a joke and say, well, of course the brain is digital because it communicates in spikes. Right. But then of course, when you look at the activity of a neuron, there’s a voltage signal, which is a continuous analog signal. And then these somewhat discrete spikes, which are really continuous, but as you say, you can count, did you, uh, one day see a neurophysiological signal and think, oh, that’s mixed feedback. That’s how I could solve that. How did, how did you, maybe you could describe the mixed feedback approach and then, uh, and then give the background.
Rodolphe 00:18:16 Okay. Yeah. I mean, I think I should tell the anecdote, because this is really a turning point in my scientific life, and it happened more or less accidentally. And I certainly, as you said, did not think of spikes as mixed feedback signals <laugh> before that. So in 2008, I helped a neurophysiologist from my university to develop a computational model of a single neuron. He was studying the effect of specific calcium channels in dopamine neurons. So we shared a student for a year and we helped him develop the computational model. And then at the end of that project, the student came to me and said, could I start a PhD? And I told him, you know, I’m not interested in, um, computer simulation. Um, but of course this student was afraid of nothing.
Rodolphe 00:19:17 His name is <inaudible> and, uh, he’s a mountain biker. So he’s the sort of crazy student who really wants challenges. And, um, I told him, okay, the first thing that I would want you to do, if you want to do a PhD, is to take this computational model and to simplify it to a second-order differential equation so that we can draw a phase portrait <laugh>. And he tried to do that. And we knew already quite a bit of neurodynamics by that time. So we were trying to connect this computational model to the little neurodynamics that we knew, and it didn’t work. He tried and kept trying; it didn’t work, and we felt that there was something missing. And so eventually we had a visiting student who had a math background and wanted to do neuroscience.
Rodolphe 00:20:13 And I told him, okay, here is a five-dimensional model, a model with five differential equations. How do you reduce that model to two? And he came back to me a week later with a phase portrait that was obviously wrong, in the sense that it was different from any phase portrait I had seen in neurodynamics. Hmm. And then we started working, the three of us, on this phase portrait, and the rest is history. But very rapidly I understood that the role of calcium channels in this neuron was a positive feedback, very much like the sodium channels of the Hodgkin-Huxley model, just in a slower timescale, and that neurons were organized with this mixed feedback motif that you can pile up at however many scales you wish, both in time and in space. And it was sort of a flash that this provides a completely novel, um, way of attacking multiscale control, control across scales. And then it took many years to sort of connect that to history, to discover that in fact mixed feedback was very well known in the cybernetics time. So in fact, the mixed feedback amplifier was the main object of study in the thirties and forties at Bell Labs, when people started using feedback to do long-distance communication. But I had never been told that. So I had to start to rediscover that.
Paul 00:21:56 But is that because cybernetics is all negative feedback, right? The story is that cybernetics is all negative feedback. Is that not the story?
Rodolphe 00:22:07 That’s what we have remembered from cybernetics, but before, um, in fact, at the time, the only way to build, you know, switches was analog. And how do you build a switch in analog circuits? By using positive feedback. In fact, historically, positive feedback was discovered before negative feedback. So all the electronic circuits early in the 20th century were positive feedback circuits, to build, you know, what we would call memories today. And negative feedback was discovered much later, and it was a complete revolution by Black to understand that negative feedback had a role, and not just positive feedback. We have, I think, sort of forgotten that early part of the history of feedback, and we have remembered the role of negative feedback. And from the invention of the digital computer, you know, positive feedback became obsolete and useless, because we had a different way to sort of encode memory, and we were only needing negative feedback in the digital age. And so that’s why we have nowadays this, uh, vision that negative feedback is analog, and positive feedback is either nonexistent or just digital.
Paul 00:23:30 And, and so you have modeled, uh, neural activity at the single-neuron level, um, as a mixed positive and negative feedback signal. I wonder if it’s worth just describing the overarching principles of that model.
Rodolphe 00:23:49 Right? So of course neither negative feedback nor positive feedback is something novel. Right. Um, and I think most people have a sort of rough idea of what negative feedback means. We think of our thermostat, and we think of, you know, uh, creating a negative error to reduce the variations of our plant and to reduce sensitivity. We think of our cruise control system for the car, and we can think of many examples of sort of negative feedback. And then I think, especially biologists, they are very familiar with the idea that positive feedback creates, you know, binary or digital memory. Um, but I think that what I discovered as fundamental in the organization of neuronal behavior is the fact that once you have both, you can just balance positive and negative feedback, and you can continuously sweep between those two.
Rodolphe 00:24:52 And in particular, there is a sort of boundary between the world of negative feedback and positive feedback where you go from one behavior to the other. And that boundary is what we call in mathematics a singularity. And this is in fact the place for ultrasensitivity, for spiking, and for thresholds. And I think it’s really a fundamental organizing principle of, um, neural circuits, the fact that neural regulators can continuously balance these two feedbacks, and so use the same device, if you want to think of it in terms of a machine, to either have a memory or to have a processor, and in fact to have a mix of both, which I think is very cool. And it’s something that as engineers we have lost in the technological world, which is nowadays divided into, on one hand, a digital world, and on the other hand, an analog world.
Paul 00:26:00 The way that you describe it, and I don’t know if this is why in engineering it maybe seems less appealing, is that it seems like a very delicate balance that needs to be walked between the positive and negative feedback control signals. Is that just difficult in engineering, uh, applications? Is that why it might be a little less appealing?
Rodolphe 00:26:29 It’s a very important question that you raise, because I haven’t spoken about the most important feature of this mechanism, which is its robustness. See, if you have one source of positive feedback, and that source is, um, think of it as being narrow range, so it provides positive feedback only in a very narrow range of amplitude and perhaps, um, temporal scale, and then it is sort of surrounded by another source, which is a source of negative feedback, which is much more broad range. And now you sort of superpose the two, and I don’t know whether it’s intuitive, but it’s like, you know, summing a negative function that is very narrow with a positive function that is broad. The result will be a sort of convex function with a little dip, with a little bump. And the little bump is created by the positive feedback.
Rodolphe 00:27:35 This picture is extremely robust, because even if you have a lot of uncertainty on both sources, as long as you have this principle of, you know, the positive feedback being sort of narrow range, you will always have this spiking behavior existing. You cannot control exactly where and when, but it’s gonna always be there. And that’s, I think, fundamental to the design of biological systems, because it’s a design principle under uncertainty. And I think that’s the most important meeting point between the brain, or even biology more generally, and control. I think of control as a discipline under uncertainty.
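To make the picture above concrete, here is a minimal numerical sketch, not Rodolphe’s published neuron models: all functions and parameters below are illustrative assumptions. A broad restorative component (negative feedback) is summed with a narrow regenerative component (positive feedback); the signature of the mix is a local region of negative slope in the total curve, and it persists under sizable parameter perturbations as long as the positive feedback stays narrow relative to the negative feedback.

```python
# Illustrative sketch only: a broad negative-feedback component plus a
# narrow positive-feedback component yields a curve with a local region
# of negative slope (the "bump" discussed above), and that feature is
# robust to parameter uncertainty.
import numpy as np

v = np.linspace(-3.0, 3.0, 601)                  # abstract voltage axis

def negative_feedback(v, gain=1.0):
    # Broad, restorative component: current increases with voltage everywhere.
    return gain * v

def positive_feedback(v, amp=2.5, center=0.0, width=0.4):
    # Narrow, regenerative component: a localized dip around `center`.
    return -amp * np.exp(-((v - center) ** 2) / (2 * width ** 2))

def has_negative_slope_region(i, v):
    # The hallmark of mixed feedback: a stretch of negative slope
    # (negative conductance) embedded in an otherwise increasing curve.
    return bool(np.any(np.diff(i) / np.diff(v) < 0))

total = negative_feedback(v) + positive_feedback(v)
print(has_negative_slope_region(total, v))        # True

# Robustness: perturb the positive-feedback parameters; the negative-slope
# region survives as long as the positive feedback remains narrow-range.
for amp, width in [(2.0, 0.3), (3.0, 0.5), (2.5, 0.45)]:
    i = negative_feedback(v) + positive_feedback(v, amp=amp, width=width)
    print(amp, width, has_negative_slope_region(i, v))   # True each time
```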
Paul 00:28:29 Since you mentioned robustness and started making the connection with biology, I suppose it might be the right time to bring up Eve Marder’s name, and how you... I know that you’re largely inspired by her work on the stomatogastric ganglion of lobsters and crabs. And maybe you can just kind of bring that in and talk about: did you already have these principles that we’ve been discussing in mind, and then you saw her work, and then that was another, uh, aha moment? Or how did you come to be interested in and appreciate the STG and her work in that system? Right? So I’ve had her on the podcast, and, you know, famously that system is robust and can operate under different temperature ranges and under different regimes, and still come out with a functioning central pattern generator, an oscillatory system that can still function in the crab. And likewise, uh, that same structure can give rise to different functions depending on the neuromodulators it’s being bathed in, et cetera.
Rodolphe 00:29:31 Right. So very soon after this, um, moment that I’ve been telling you about earlier, we said, oh, we feel that this is very important, we should, you know, go to the neuroscience meeting and talk about it. So we sent an abstract to, um, the neuroscience meeting and we went there. You know, the biggest conference in control is about 3,000 people, and then there I am at San Diego, 30,000 people. Um, where do you start? <laugh> yeah. And then one postdoc at the time, who then became my colleague, Tim O’Leary, came to our poster, and he had been working in Eve Marder’s lab for, um, three or four years at the time. And he told us, this is what we need. And that was the start of discovering Eve Marder’s work. And very rapidly I found that I had found my home in neuroscience, that to me, neuromodulation...
Paul 00:30:40 Oh, let me interrupt you.
Rodolphe 00:30:41 ...is the control system of the brain. Yes.
Paul 00:30:45 Ah, okay. Well, I’m just curious: you’re talking about SfN, which is the Society for Neuroscience, the largest neuroscience conference. What year was that, approximately, and how well trafficked was your poster?
Rodolphe 00:31:00 <laugh> <laugh> I think it was 2012, if I’m not wrong. And then we went a couple of times. Okay. Maybe 2013. And then, you know, I visited Eve Marder’s lab and spent a year there. Then eventually Tim O’Leary took a faculty position in Cambridge. So since then we have sort of become much closer to each other. And I think that, yeah, Eve Marder’s work is really... I find many similarities between the position of neuromodulation in the world of neuroscience and the position of control in the world of engineering. You know, I think her work is nowadays recognized as sort of groundbreaking and extremely fundamental, but I also know that for many, many years she worked pretty much in isolation, um, being told that she was working on something obsolete and from the past.
Paul 00:32:01 So I, I I’ve read this, excuse me. I read this quote from you, since you said that about Eve, I don’t remember the source, but you wrote, if you wish to contribute original work, be prepared to face loneliness <laugh> but never stop asking and listening. So
Rodolphe 00:32:18 That’s my experience over the last 10 years. Um, and that’s why I was saying that it has been sort of a turning point in my career, because before that, you know, when I submitted a paper, I knew it was always going to be accepted. I mean, it was more like business as usual <laugh>. And then, starting in 2009, I’ve got papers and papers and papers rejected, and it has really been a very hard journey. Not that much for myself, I would say, because I was sort of established by then, but I’m thinking more about my PhD students and postdocs who were really in the wild. And I think this is inevitable. Um, it was a long journey, I think, mostly. And, you know, when I look back at those reviews and those hard times, um, I don’t blame anyone.
Rodolphe 00:33:24 I think if there is anyone to blame, it was us. Um, we were using a language... I mean, we were not communicating. We were essentially expressing things in our own language, but not in a language that allows communication. And we wanted to reach out to communities, and we didn’t really speak the language of those communities. And so that has been sort of the reason why it took a long time. I would say that this difficult time ended with being invited to Telluride and discovering neuromorphic engineering in 2018. I had never heard of Carver Mead until 2018. And going to that conference was, um, sort of a shock, because I suddenly felt like all the work we had been doing for the 10 preceding years was essentially an answer to the questions raised there. And at the same time, it felt like going back home: suddenly I was speaking to engineers, I was speaking to people who understood my language. And since then things have been easy, I would say.
Paul 00:34:40 No more loneliness. What happened? Is that when you started modeling the crab stomatogastric ganglion system, or was that a couple of years later? Because I know you... so we’ll talk more about neuromorphics and your interest in those, and you have been, for a few years at least, modeling parts of the stomatogastric ganglion. Was there a challenge at that conference? Or how did that begin?
Rodolphe 00:35:13 No, well, that began as soon as we met Eve’s group, and we were sort of given a relatively small network, but we were told that this network was nevertheless very rich. And so <inaudible> tells you that, you know, the STG has about 13 neurons, but in fact you can sort of have a cartoon model with about five neurons, and that’s, you know, the size of circuit that we were ready to jump into coming out of our cellular work. Okay. So the STG was very instrumental in helping us understand how you move from the cellular level to the circuit level, and whether there is any chance to do that in a sort of bottom-up fashion. Because I think many people think that, you know, cellular work is sort of useless if you are interested in brain functions.
Rodolphe 00:36:08 Yeah. And so there is this big split of communities. And so of course it’s very questionable whether doing work at the cellular level has any chance to have an impact at a circuit or functional level. But that was our interest from the start, because my interest from the very start was to discover a principle of what I call control across scales, which I think is the big, big open question nowadays, not only in neuroscience, but certainly in neuroscience. If you go to the SfN meeting, you see this very layered, uh, conference where, you know, some communities study the cellular level, some communities study the animal level, and so on. But the grand challenge of course seems to be how you go from one level to the other. It’s hard to imagine that a community would have worked for more than 70 years on the cellular level if it was totally useless, but at the same time it’s still, I think, very much an open question how the cellular level sort of governs the organization of higher levels.
Rodolphe 00:37:31 And, and as a control engineer, this is a very central question for me.
Paul 00:37:37 So let’s talk about across scales for a moment, because it’s fairly intuitive to think at a single neuron, you know, thinking about the molecules and the receptors that neurons need for homeostasis and to remain within a range to stay alive, and then it’s still intuitive to build it up and think, well, these are now communicating with each other. But then you start getting into higher, let’s say cognitive, functions, right? It seems harder to connect that in a control story: where is that internal reference signal that is being controlled for, right? Does that make sense? Do you think of higher cognitive functions as control also?
Rodolphe 00:38:22 Definitely, but probably I think that way, because I don’t know much about what you mean by higher cognitive level <laugh> and so I’m so
Paul 00:38:32 No one does though, but no one does,
Rodolphe 00:38:34 <laugh>, I’m sort of very naive in lacking the knowledge of neuroscience. But at the same time, um, I’ve been continuously amazed by how much, you know... In the first years we spent a lot of time, most of our time I would say, explaining how a single cell can sort of be modulated between a spiking state and a bursting state. And of course it was very difficult to make any point out of that, because everyone was telling us, I mean, this has been studied 40 years ago, what do you want to do?
Rodolphe 00:39:21 And yeah, I will not enter into too many details because it would get technical. But then, um, already, if you go from the single cell to an STG, so an STG is essentially an example of what neuroscientists call a central pattern generator. So you must generate rhythms, and the STG generates a slow rhythm with essentially two neurons and another faster rhythm with another two neurons. But how you generate a rhythm out of two cells has been studied for a century. It’s the so-called half-center oscillator model. You just attach two cells to each other with inhibitory coupling, and you create a sort of antiphase clock in a very natural way. Now, discovering the fact that cellular bursting was the <inaudible> mechanism of that very simple rhythm was already, I think, a big, big step forward for us, because it has always been a difficulty for me, when I studied the literature on central pattern generators, to know whether you should think of central pattern generators as autonomous clocks, or as, you know, circuits that interact with their environments.
Rodolphe 00:40:49 And of course we know that it’s a mix of both, but many people have a view of these inhibitory networks as being sort of very autonomous and not requiring any signal from the environment. And then you see other people who sort of reject that idea, saying no, no, no, that doesn’t make sense, any cell must interact with the environment, which is true. You see, this balance in fact is very much, again, a manifestation of the balance between positive feedback and negative feedback, already at a higher scale. There is not much cognition perhaps going on in a CPG, but it’s already a higher level than the single-cell level. In fact, we see nowadays that CPGs are all over the brain, in the sense that inhibitory networks play a role in generating rhythms in many, many areas of the brain.
Rodolphe 00:41:46 And then if you go even higher up, you find that this transition of a single cell between spiking and bursting, for instance, has been described to play an important role in sleep, and in sort of disconnecting the thalamus from sensory input and creating a sort of gate between the cortex and the sensory, um, layers. And again, you see the same balance between inhibition and excitation as something that, you know, can regulate how much you want the behavior to be endogenous and sort of disconnected, or the circuits to be connected to the world. And nowadays it’s still a very active research area in neuroscience and in neuromodulation; people talk about brain states, and brain states sort of cover all the levels, all the way from, you know, the whole brain, which is whether you sleep or whether you are awake, to very small circuits that can be sort of temporarily disconnected or connected.
Rodolphe 00:43:09 Now, this is just an example, and it’s one of the few areas of neuroscience that I have studied a little bit, I would say, but this gives me hope that, you know, this bottom-up approach can go a long way. And I don’t know how much it can cover cognitive areas, but certainly it is enough if you ask the engineering question of using an event-based camera to create something like attentive vision: how can you make a camera asleep or awake? I think that, you know, already this is sort of a very engineering question, which by the way is a very important question nowadays in engineering. And I think you can directly connect that question to what I’ve been talking about, and to what has been studied in neuroscience for now 50 or 60 years.
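As an aside, the half-center oscillator Rodolphe mentions above can be sketched in a few lines. The sketch below is a generic firing-rate model with mutual inhibition plus slow adaptation, not the Marder lab’s STG models or Rodolphe’s conductance-based circuits; the threshold-linear nonlinearity, the parameters, and the anticorrelation test are all illustrative assumptions.

```python
# Illustrative sketch: two rate units coupled by mutual inhibition, each with
# a slow adaptation variable, settle into an antiphase rhythm, the basic
# half-center oscillator idea.
import numpy as np

def simulate(t_end=60.0, dt=0.001, drive=1.0, w_inh=2.0, g_adapt=2.0,
             tau_r=0.1, tau_a=2.0):
    n_steps = int(t_end / dt)
    r = np.array([1.0, 0.0])          # firing rates; start asymmetric
    a = np.zeros(2)                   # slow adaptation variables
    trace = np.empty((n_steps, 2))
    for k in range(n_steps):
        # Each unit gets constant drive, fast inhibition from the other unit,
        # and slow self-inhibition through its own adaptation.
        inp = drive - w_inh * r[::-1] - g_adapt * a
        r = r + dt / tau_r * (-r + np.maximum(inp, 0.0))   # threshold-linear rate
        a = a + dt / tau_a * (-a + r)                      # slow adaptation
        trace[k] = r
    return trace

trace = simulate()
late = trace[trace.shape[0] // 2:]    # discard the transient
corr = np.corrcoef(late[:, 0], late[:, 1])[0, 1]
print(f"rate correlation after transient: {corr:.2f}")    # strongly negative: antiphase
```

With these assumed parameters the two units should alternate, which is why the correlation comes out strongly negative; weakening the cross-inhibition or removing the slow adaptation should abolish the rhythm.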
Paul 00:44:05 So how far have you taken the, taken the neuromorphic, uh, approach to the STG?
Rodolphe 00:44:13 Okay. So again, I mean, we started neuromorphics quite recently, and I have very little expertise in circuit design, but I found in Cambridge one or two students sort of brave enough to design analog circuits. You have to imagine that nowadays, for students, it requires a lot of courage to <laugh> dig into analog circuit design, because <laugh>, why would you study analog anyway? Uh, we have built, students of mine have built, what I would say are the first neuromodulable neuromorphic circuits. So, very elementary circuits that can be controlled between spiking and bursting, between, you know, a number of discrete states. It’s tremendously encouraging to see that it works, and to see that it works in the presence of uncertainty. So I think my next five years, or even ten years, will entirely be in that area, because now we really are at the stage where we can try to design and build circuits of growing size. And the STG will be one example of them, uh, where we are gonna start assembling those motifs to create circuits of higher dimension. And of course, um, electronics is a very good area to scale up, because you can very easily build chips with several thousands of neurons of the type of neuron that we have built just on a PCB with two or three.
Paul 00:45:54 So as you build up, and even as you’re modeling, you know, even your mixed feedback signal, the way that you model a neuron’s membrane, right, to simulate the signals coming from a neuron. A real neuron has dozens of receptors, ion channels, et cetera, and you have to abstract away from that. And like you were saying earlier, if you can get <laugh> to two equations, um, right. Do you feel, as you scale up, that you’re gonna continue to need to abstract things out, or do you have any, uh, plans or desire to build in finer detail of those ionic currents? Or are we at the right level of abstraction, in your opinion, because it functions as a mixed feedback signal, and then you can change the motifs with the neuromodulation principles?
Rodolphe 00:46:47 Right. Um, okay. Maybe there are two aspects to the question. One aspect is, if we look at the complexity of these neuromodulators and receptors and ion channels, to what extent do we want to replicate that in a circuit? And mm-hmm <affirmative>, my view is that there is a huge amount of redundancy in those circuits, um, which makes them incredibly adaptive and, you know, reconfigurable and evolutionary and so on. And of course, that’s perhaps the biggest challenge if we want to build machines that have the same level of sort of evolutionary, uh, capabilities.
Paul 00:47:34 Well, they’re also continuously being turned over,
Rodolphe 00:47:37 Turned over in, in what sense? What do you mean by that?
Paul 00:47:40 They’re continuously, uh, moving on the membrane, each ion channel, and then that ion channel dies and a new one is synthesized, you know, so there’s continuous turnover in the system as well. So,
Rodolphe 00:47:52 So that has, I think, a pretty good translation in Carver Mead’s electronic elements, in that you use these transconductance transistors that are voltage-gated. And you can really think of the voltage as a way to modulate their maximal conductance, which is very much like what a neuromodulator does by, you know, modulating the expression of a channel. Essentially, you create more conductance or less conductance. So there is quite a good match between those two principles. And so that I think we can imitate, but of course in a much simpler way, right. And of course, um, the way I think of how you control a spiking neuron, which would be sort of the simplest control system, is that in principle, if you want to control a spike, you only need one conductance for the positive feedback and one conductance for the negative feedback.
Rodolphe 00:48:51 So you only need two parameters and you balance the two parameters. But in fact, the way a neuron does it, it uses perhaps hundreds of parameters to control the positive feedback and another hundreds of parameters to control the negative feedback. Why so much redundancy? Because then you can think that, you know, there are many, many different ways you can modulate, let’s say, the positive or the negative conductances. And you can sort of think of each receptor as being another sensor for the neuron. And so a single neuron can sort of pay attention to a diversity of signals in the environment. But at the end of the day, the control task of all these channels and receptors has a certain level of simplicity: it just has to balance the positive and negative feedback, but it can do it in many, many different ways. That’s really something I learned from Eve’s work.
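A toy way to see the “two lumped parameters, many channels” point: below, the same negative-slope test used in the earlier sketch classifies a neuron as excitable or not from just a lumped positive-feedback gain and a lumped negative-feedback gain, and random splits of those gains across many hypothetical channels all land in the same class. The criterion, the parameters, and the channel model are assumptions for illustration, not the STG or neuromorphic models discussed here.

```python
# Illustrative sketch: many channel parameters, but a single balance decides
# the behaviour. Different random channel combinations that preserve the
# lumped positive/negative balance are interchangeable.
import numpy as np

rng = np.random.default_rng(1)
v = np.linspace(-3.0, 3.0, 601)

def excitable(g_plus, g_minus, width=0.4):
    # Classify the lumped balance: True if the curve
    # g_minus*v - g_plus*exp(-v^2 / (2*width^2)) has a negative-slope region.
    i = g_minus * v - g_plus * np.exp(-v ** 2 / (2 * width ** 2))
    return bool(np.any(np.diff(i) / np.diff(v) < 0))

def random_channel_split(total, n_channels=20):
    # Spread one lumped gain across many hypothetical channels at random.
    return rng.dirichlet(np.ones(n_channels)) * total

# Very different "expression profiles" realizing the same lumped balance
# all give the same behavioural class.
for trial in range(3):
    g_plus_channels = random_channel_split(2.5)
    g_minus_channels = random_channel_split(1.0)
    print(trial, excitable(g_plus_channels.sum(), g_minus_channels.sum()))  # True

# Shifting the balance toward negative feedback removes the excitable region.
print(excitable(g_plus=0.3, g_minus=1.0))                                   # False
```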
Paul 00:49:51 You were talking about redundancy, but then earlier, you know, we were talking about robustness and how that’s an important principle. Is it possible that, you know, these hundreds of different ion channels and ways that the neuron can behave exist because this allows it to be both robust, but also to be used for multiple different cognitive functions, multiple different purposes in different operating regimes?
Rodolphe 00:50:19 Yeah, that’s the way I think about it. And this is, I think, where the work of Eve is so important, because I would say that if you think of the STG as a control system, the task of that control system is pretty simple: it just has to articulate two rhythms with each other. But it is done with tons of neuromodulators and tons of receptors, and this is what is creating the complexity. So sometimes I say that I think of neural circuits as very simple control systems with very complex controllers, which is almost the opposite of what we do in engineering, where, you know, we always use the same type of PID controller, but then we put many, many of them to control a very complex system. Um,
Paul 00:51:12 But at some point that gets wasteful, right?
Rodolphe 00:51:15 Well, wasteful, we have to be careful when we use that word <laugh>, um, because I mean, our experience of animals is that they’re not wasteful at all. Um, but I think of this redundancy as being accumulated over evolution. And so the way I think of building machines that sort of mimic those behaviors is that those machines, at least initially, would be much, much simpler, but perhaps over time, you know, you add another function. It’s a little bit like software, the way software is developed nowadays: you keep adding functions, and at the end of the day they look pretty wasteful <laugh> <laugh>
Paul 00:52:01 Okay, Rodolphe, let me... I’m gonna play you this question from a listener, because I wanna make sure I get it in before the end, and it might be the right time to play it and might lead to other, uh, topics here. So here’s the question from Matt.
Speaker 5 00:52:16 Hi, Professor Sepulchre. Since you and your colleagues’ work on mixed feedback control has made progress on the issues of component variability and noise fragility, what do you see as the next big technical barrier standing in the way of mainstream adoption of neuromorphic methods, if any? Or is it more a matter of getting the right people in industry interested? Thank you.
Rodolphe 00:52:41 That’s a tough question <laugh>. Talking about the future is always difficult. My experience is that technology, as very often, is way ahead of us, and when I say us, I mean researchers. So I think that the theory of neuromorphic design is lagging behind the technology of neuromorphic design by a very significant margin. Nowadays there is a huge interest from the industry for neuromorphics. Intel is building neuromorphic chips, and the event-based camera was commercialized just a few years ago, but I think it’s a complete revolution in the computer vision industry and community. But the theory lags behind. Why? Because what we have on the table, what we learn as students, is a sort of double set of tools, and you pick your digital tools or your analog tools from two different bags. And you do that at every level, in every discipline.
Rodolphe 00:53:57 And at least my understanding of neuromorphics is that it’s precisely the mixture of the tools that is fundamental to neuromorphics. And the truth is that we don’t have a theory for that. We don’t know how to handle spikes. Some people handle spikes in a statistical way. Some people say that spikes are irrelevant. Some people say that each single spike is hugely important. But I mean, this diversity of almost opinions, I would say <laugh>, is just telling us that we don’t have a good theory to handle spiking information technology at the moment.
Paul 00:54:40 What role do you think the modern successes of rate-based deep learning models have played, or continue to play, in that divide? I mean, a lot of people, even in neuroscience, look at the success of these rate-based models, sometimes ones that are just feedforward, you know, and really abstract away a lot, right, going back from McCulloch-Pitts, you know, the binary neuron, to now these rate-based approaches, and they can do so much. Right, like you just said, we don’t need spikes, they’re irrelevant. Um, you know, how do you view that, just personally, but also then, related to Matt’s question, in terms of getting people excited about neuromorphics?
Rodolphe 00:55:26 Well, my experience is that a lot of people are getting very excited about neuromorphics these days, but it was certainly not the case 10 years ago. And that was the time when I was digging into spiking, and so I certainly asked myself that question very often. Uh, I remember reading a famous paper, I think, about the spike, and <laugh> thinking that perhaps I was working on something completely irrelevant, but, um, <laugh>
Paul 00:55:57 Oh, yeah.
Rodolphe 00:55:58 But, um, you know, I think it’s phenomenal what has been achieved based on McCulloch-Pitts ideas. You know, you use so little of what we know of the brain, and then you build 60 years of technology that is still developing. And I mean, who could have predicted in the sixties what has happened in digital technology? It’s just phenomenal. At the same time, I think that an increasing number of people think that, you know, digital technology has very, very strong limitations, and probably Carver Mead was visionary in sort of foreseeing that thirty years before the others. And probably it’s because he was at the very forefront of computer technology. Because he was an expert in silicon technology, I think he perceived before others that this was hugely inefficient.
Rodolphe 00:57:07 Um, and sooner or later we would be hit by this inefficiency. And that’s what we see now: suddenly, you know, the carbon footprint of digital makes news, and it’s really becoming a huge problem. And that’s why there is such an interest in the industry. And so I’m not worried about, you know, the potential of neuromorphics. I’m more worried about the pace of development of the theory. It’s very slow <laugh>, um, partly because, you know, of where most people work. I mean, this is a bandwagon effect. So nowadays it’s, you know, far easier to get a job if you just develop another deep neural network. Um, and I don’t think there is much you can do about this phenomenon. It’s a sociological phenomenon that will always exist. Um,
Paul 00:58:06 But if we let’s say we could siphon from a black hole, some energy portal. And so we had nearly infinite power consumption ability, and then didn’t have to worry about wasting the power, right. Would it matter? Could we just scale up with what we have, and, and of course we could give the energy back to the black hole, so it wouldn’t increase, uh, global warming anymore, et cetera. It wouldn’t wouldn’t hurt the carbon footprint. No, but, uh, then would we still be building neuromorphic and, and worried about, uh, the power consumption? Is there something else besides power efficiency, of course, that neuromorphic computing can add or, okay. What’s that
Rodolphe 00:58:47 So, very good point. I’m talking about energy just because it’s in the news currently, and that’s the argument of companies like Intel, but efficiency has never been my main interest. I have very long been fascinated by the power of animal vision in selecting the right sort of information, um, with a speed and a resolution that will never be achieved by any digital camera. So I think that when you study for five minutes the difference between a digital camera and an event-based camera, you immediately realize that there is nothing more stupid than a digital camera. It’s storing streams of millions of pixels for nothing most of the time, and then piling this up in huge servers. A century from now, we will laugh about that in the same way as we laugh about, you know, the way people were using mechanical power at the end of the 19th century.
Rodolphe 01:00:11 And so there is no question that the way we use digital technology today will sound extremely obsolete, maybe already 20 years from now, and not just for energy efficiency; it’s much more than that. Think about robots, soft robots, that need to grasp anything with a sort of sense of touch. We are nowhere in designing such robots, nowhere, and I think we will never get anywhere as long as we stick to digital technology. Likewise for acoustic sensing, likewise for visual sensing, all the sensing that we see in the animal world.
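As a rough, numbers-made-up illustration of the frame-versus-event point, the toy below compares how many values a frame readout and an event readout produce for a mostly static synthetic scene, using a simplified event rule (a pixel reports only when its log intensity changes by more than a fixed threshold). Real event cameras differ in many details; this only illustrates the sparsity argument.

```python
# Toy comparison, illustrative numbers only: a frame camera re-reads every
# pixel every frame, while an event-style readout reports only pixels whose
# log intensity changed by more than a threshold.
import numpy as np

h, w, n_frames = 120, 160, 100
background = np.full((h, w), 100.0)        # static scene
threshold = 0.2                            # assumed log-intensity threshold
last_logged = np.log(background)

frame_values_total = 0
events_total = 0
for t in range(n_frames):
    frame = background.copy()
    y, x = 10 + t % 80, 20 + t % 100       # a small bright patch drifts around
    frame[y:y + 5, x:x + 5] = 200.0

    frame_values_total += frame.size       # frame readout: every pixel, every frame
    log_frame = np.log(frame)
    changed = np.abs(log_frame - last_logged) > threshold
    events_total += int(changed.sum())     # event readout: only changed pixels
    last_logged = np.where(changed, log_frame, last_logged)

print(frame_values_total, events_total, frame_values_total / events_total)
```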
Paul 01:01:02 How long will it take for the theory to catch up?
Rodolphe 01:01:05 I think it’s speeding up. Um, you know, I can at least speak from my work: if I look over the past 10 years, I think the pace has been extremely slow, because there has been a lot of deconstruction and a lot of reconstruction. But I’m quite optimistic that the developments will be quite fast in the next 10 years. And I think that many people express the same in different areas. There is a sense, I think, growing that we are at sort of a turning point in the way we understand the brain in the sense of designing machines. So we are very close to, you know, designing machines in a very novel way. And I think that perhaps the event-based camera is the best current example of that.
Paul 01:02:03 So from your engineering perspective... I know you have your fingers dipped in a lot of different areas, uh, you have a view on neuroscience, and I know earlier you said that you don’t understand enough neuroscience to speak about higher cognitive functions, et cetera. But I know that you’re aware of the modern deep learning successes, and there’s this large push in neuroscience these days to use those deep learning models to help us understand and control (actually, if you look at Jim DiCarlo’s and Dan Yamins’ labs, and I know that you think of control as understanding), to use those deep learning models as a window into how our brains function. Does this look right to you? Does this look silly to you? Do you think, like neuromorphics, in a hundred years we’re gonna look back and think that the whole deep learning, quote unquote, revolution that is continuing to scale... sorry, I’m conflating two things: using deep learning to understand brains, and deep learning itself continuing to scale. Anyway, let’s stick with the neuroscience and deep learning comparison. Do you think that’s a useful way to model what’s happening in brains, or do we need to use things like, you know, the mixed feedback signal, which seems like an entirely different beast?
Rodolphe 01:03:26 I’m not sure. I mean, using deep learning to advance neuroscience doesn’t speak to me. Um, but using deep learning for whatever scientific question doesn’t speak to me; that doesn’t mean that it’s silly <laugh>, it just doesn’t speak to me. And I think we have to be very careful about the timescales of these hypes. You know, deep learning started, let’s say, 2012, so this is just 10 years ago, and I might be wrong, but I think that deep learning will be very short-lived. Um, I see nothing fundamental in deep learning, and I think that many researchers in machine learning would agree with that. Don’t take me wrong, it is creating, you know, a huge advance in the industry and in the technology.
Rodolphe 01:04:23 So it is a technological advance, perhaps a technological revolution, I don’t think so, but it’s certainly not a revolution in science. And I think that it is very tempting, especially if you don’t work in machine learning but you use machine learning, if you use deep learning, to think of it as a sort of black box that is doing miracles, but that won’t last. And in fact, we have seen that in previous sort of winters and summers of machine learning: you know, whenever you create big expectations, the disappointment comes next. And so I think we are very close to that stage.
Paul 01:05:05 Where, where do you see us in neuroscience right now? <laugh> I know it’s an unfair question.
Rodolphe 01:05:12 You know, I know too little about neuroscience to say anything relevant about neuroscience. Um, I see neuroscience as the most important scientific field in science today, and so no question that this is gonna be the big science of the 21st century. I think that progress has been very slow initially, and that it’s speeding up. Um, and what can I say beyond that? Maybe as an anecdote, I could mention a book that I read last summer, by Mark Solms, called The Hidden Spring. So this is a book about consciousness.
Paul 01:06:05 Oh, yeah. Uh, Mark, Mark Solms, Mark
Rodolphe 01:06:07 Solms. So this is a book about consciousness. Yeah. And usually, when I read articles about consciousness, I stop after two paragraphs <laugh>, but I read this book in and out. And what I found really fascinating in that book is that I understood it all with the very little background that I have in neuroscience, and I really read it as a control textbook. To me, Mark Solms is describing the brain as a control system. And now we are talking about, you know, perhaps the biggest or most grand problem of neuroscience. And this, again, speaks to me, the fact that I think there is a convergence. And so what I especially liked in that book is that I think Mark Solms, at the end of the day, is demystifying the question of consciousness, and he’s using his background in neurology, his background as a psychiatrist, he’s making a huge number of references. And yeah, I think this is a very important time, the fact that now people feel, you know, allowed to go back to the roots, allowed to contemplate the developments of neuroscience and go back to the early questions.
Rodolphe 01:07:38 I think it's a very positive sign that there has been a huge, um, development of the field and that, um, the field is entering an era where perhaps progress will be less disorganized and more, uh, cohesive.
Paul 01:07:57 I just recently had a guest on the podcast, and, um, you probably don't know her name, but she is an ex-physicist. And I was making a connection between her work and yours, because she works on these, um, models called linear threshold networks. They are essentially graph models where each node is an excitatory unit, and then there's this background inhibition, yeah, that follows a couple of rules, yeah, that's just bathing the whole, uh, network. And she's using that to, uh, derive rules (they're mathematically tractable models), so that she can look at the, um, the structure of the model and then predict the dynamics that come out of it. And by building up these models, you know, to five, ten units, et cetera, she can make these, um, dynamical attractors that create sequences and so on; like, she models a horse galloping, and sequences, and all sorts of different dynamical attractors.
Paul 01:08:59 And the goal is to be able to say what kind of, uh, dynamical structures will result from a given network structure. That's sort of beside the point. One question is, I'm still trying to connect that with the mixed feedback signal approach; there's some fruitful, uh, project there to be had. But the other reason I thought of her just now is because, thinking about moving forward in neuroscience, as you say, it has been slow progress, but it seems more, uh, concerted. I mentioned that she has a physics background because, uh, physicists seem to still be coming into neuroscience in droves, but the background of neuroscience includes a bunch of engineers also. And I don't know if that has, uh, just been a steady march. Do you think neuroscience needs more engineers, needs more physicists, needs more molecular biologists? We all know that's not true <laugh>. What do you think, is it lacking an engineering approach? Is the engineering approach going to help move the needle?
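To make the kind of model Paul is describing concrete, here is a minimal sketch in Python of a threshold-linear network of that general flavor: excitatory units on a small directed graph, a uniform background inhibition bathing every connection, and rectified-linear dynamics whose attractor (here, a repeating sequence around a three-unit cycle) is shaped by the graph structure. The parameterization, the eps/delta values, and the helper names are illustrative assumptions, not the guest's actual models or code.

```python
import numpy as np

def graph_weights(adj, eps=0.25, delta=0.5):
    """Weight matrix for a threshold-linear network built from a directed graph:
    mild disinhibition (-1 + eps) along edges, stronger inhibition (-1 - delta)
    everywhere else, and no self-coupling. adj[i, j] = 1 means an edge j -> i."""
    n = adj.shape[0]
    W = -1.0 - delta * np.ones((n, n))   # background inhibition between all pairs
    W[adj.astype(bool)] = -1.0 + eps     # edges are relatively excitatory
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, b, x0, dt=0.01, steps=20000):
    """Forward-Euler integration of dx/dt = -x + [W x + b]_+ (rectified linear)."""
    x = np.asarray(x0, dtype=float).copy()
    traj = np.empty((steps, x.size))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        traj[t] = x
    return traj

# Three excitatory units wired in a directed cycle: 0 -> 1 -> 2 -> 0.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
W = graph_weights(adj)
b = np.ones(3)                           # constant external drive
traj = simulate(W, b, x0=[0.2, 0.05, 0.0])

# Late in the run, activity cycles through the units in sequence;
# order the units by when each one peaks in the final stretch.
peaks = traj[-3000:].argmax(axis=0)
print("order of late-time peaks:", np.argsort(peaks))
```

Changing the graph (adding nodes, rewiring edges) changes which sequences the network settles into, which is the structure-to-dynamics question Paul is gesturing at.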
Rodolphe 01:10:08 I think neuroscience needs all backgrounds, and, um, it certainly needs physicists as well as it needs engineers. Um, I cannot resist telling another anecdote. Back in, I think, 2009, very early in my journey into neuroscience, I spoke to a computational neuroscientist, and he told me, you know, there is only one good background if you wanna do computational neuroscience: it's physics.
Paul 01:10:35 Uh, oh.
Rodolphe 01:10:36 So he was sort of kicking me out of the room. Um, yeah. And, uh, I've seen a lot of...
Paul 01:10:41 That’s a very physicist thing to say by the way.
Rodolphe 01:10:44 Yeah. He had a PhD, he had a background in physics, indeed. Uh <laugh>
Paul 01:10:49 Of course.
Rodolphe 01:10:49 Yeah. But, um, I've seen a lot of that, I would say. And I would say that still, nowadays, computational neuroscience is probably dominated by physicists, and there is nothing wrong about it. Um, except perhaps that it has created a vision of the brain that is very similar to, you know, we spoke about, um, the vision of James Gleick, of celestial mechanics. So I am still sometimes amazed to see some people thinking of the brain as a sort of gigantic, um, universe where things rotate like planets, and, uh, that creates whatever it creates. Um, perhaps engineers are useful to complement that view with a view that is much, much closer to design. And I think that my competitive advantage as an engineer in that story is the design element. So the work that you just mentioned sounds extremely interesting and in fact resonates with, I think, some of the things that we do. But my first question to that physicist would be: how would you build a circuit that does that?
Rodolphe 01:12:13 And I think that this is a second question, and it is what we call in control the realization question. You know, you build an abstract, um, model of something, and then you ask the question: how do I translate this mathematical model, or this simulation model, into a machine, into a physical machine? And I find that this realization question is not very often present in neuroscience. And the result of that is a gap. You see, I know that someone like <inaudible> has very little communication with computational neuroscientists, and I have perceived that gap very often between computational neuroscience and experimental neuroscience. So, yes, I think we need more engineers in neuroscience. And I think that this will happen because of neuromorphic engineering on one hand, but also because of, um, medical neuroscience; inevitably, we will see a move from science to medicine.
Rodolphe 01:13:23 And whenever we see in biology a move from science to medicine, we see the analogous move from physics to engineering, because medical doctors, you know, at the end of the day, they don't really ask whether they understand the brain. They just want to heal a dysfunctioning, uh, brain. And in that sense, they're very close to engineers, who want to build things. And I think that a lot of progress in neuroscience, I'm sure, will come from brain-machine interfaces, will come from, um, deep brain stimulation, uh, to cure diseases. And so I'm quite optimistic about the progress of neuroscience being driven by medicine, more and more, rather than by science only.
Paul 01:14:14 I don’t wanna get you in trouble, but what about philosophy? Is there a role for philosophy?
Rodolphe 01:14:20 Well, of course; it's like asking a physicist if he thinks that physics is important. Um, I have a bit of background in philosophy, and I think that, um, philosophers do have a role, um, in neuroscience, and in fact a very important one. Maybe not so much to develop theories of consciousness, but, um, to bring more epistemology into neuroscience. I think that, um, there is a lot of confusion, um, in neuroscience between the sub-communities, which have sort of developed their own languages and then have difficulties speaking to each other. And I think that's the sign of a field that is developing, but at some point, if you want to put a little bit of order <laugh> and structure in that mess, I think philosophy is very useful. And another, uh, I think, value of philosophy is to move people away from their methodologies, which in neuroscience could also be, um, you know, experimental devices, and move them back to the questions, in particular the sort of core questions.
Rodolphe 01:15:38 Um, and so I think that philosophers have always had that role in science. And it is something that has been slipping a little bit these days; sometimes you feel like philosophy is no longer regarded as a science, and there is this, um, we tend to think that science is only about technology, and that, um, the humanities are not really in the same, um, playground. I disagree very strongly with that view. I think that, um, science is a human adventure, and that, um, the humanities have as important a role as, um, the technical sciences, and, um, certainly that's the case for the brain. And, you know, we haven't even been talking about all the ethical questions. And yeah, I think there is a place for philosophy in many, many areas of neuroscience.
Paul 01:16:45 All right, I've kept you long enough, but I wanna end on this, perhaps, unless you have other things that you would like to discuss. You know, we've talked about sort of the history of feedback and how it began with positive and mixed feedback, and then negative feedback came to dominate with cybernetics, and then cybernetics went away. So there are these kind of fads, um, and we've talked about how you think that the deep learning of these days is gonna be fairly short-lived. And we discussed how, you know, I read that quote from you, and I'm gonna read it again: if you wish to contribute original work, be prepared to face loneliness. But the question is, how do you know you are working on the right thing? Is it just that you are submitting manuscripts and they're getting rejected <laugh>? And then how long can you continue on that path? Because not all paths are the right paths, and not all subject areas are the right ones to study to answer certain questions, right? Engineering isn't applicable to certain questions. So how do you know that you're doing it the right way? I think a lot of people struggle with this. I struggled with this question throughout my career.
Rodolphe 01:17:55 Definitely. Um, I think in that same interview that you are quoting, I advise to always think of, um, our profession as a privilege. I feel very privileged, you know, to, um, be paid to do what I'm doing, a little bit like artists who are paid for what they are doing. Um, I think it's a very luxurious, um, position, but of course this sort of role of research and academia is very much challenged nowadays. Um, because, you know, we like to think more and more of academic people as contributing to, you know, the benefit of society, and they have to do their job <laugh>. And, um, and so there is this sort of business-like model of academia that, um, many people resent, and especially, I would say, um, young people. Um, I've often had conversations with postdocs telling me, you know, science is no longer what it used to be; I will never have the freedom that you have had; I'm forced to work on this and this, and I have no choice.
Rodolphe 01:19:07 Um, I think there is always a balance. Um, and I think of the academic profession as sort of something in between a profession and the arts. Sometimes I feel a little bit like part of my job is the job of an artist, um, not my entire job. Um, an artist is allowed to do wrong things. An artist is allowed to work on things that, you know, will never be of interest to anyone. And I think that academics should be allowed to work on things that will never have any impact. Um, but of course, you know, if you manage a department and you hire people, you also want to make sure that those people bring in some money. And so we are not just living in a purely artistic world. Um, and it is a balance that is very difficult to maintain.
Rodolphe 01:20:07 Um, I would also say that, as I was describing earlier, I think that the first part of my career has been closer to a profession, and the second part of my career has been closer to, um, an artistic life. And certainly moving to Cambridge has given me a freedom that, I don't know how many places still, um, give that freedom. But maybe this is something that one can develop over time. And I always tell my postdocs, you know, perhaps there is a way, there is a path, to what you wanna do, and perhaps you have to sort of accept that, um, you will not immediately be given the freedom of, perhaps, working on something totally useless <laugh>. Um, but perhaps you can always try to push things in that direction. And I think that in my case it certainly has been a very slow, uh, journey.
Rodolphe 01:21:10 And of course I've been lucky in many ways, because you also depend on opportunities, and so on; you don't control everything. But I think you can always try to push the path in the direction of things in which you believe, because at the end of the day, the only thing that makes it possible for you to work in isolation is to believe in what you're working on. And we have to be careful with that sort of faith <laugh>; um, it can be very dangerous. So I think it's good to believe in something, but it's very important to keep interacting with others, so that they always have a chance to tell you that you are a fool.
Paul 01:21:59 <laugh>. Do you agree with, like, the postdocs you were talking about, that perhaps, on average, the struggle, uh, along that path is greater than it used to be?
Rodolphe 01:22:14 I don't know. It's very hard for me to compare, um, the current situation to the situation 40 years ago. I think it was hard at the time; it is hard today. Maybe it has always been hard. Perhaps there is a tendency to think that academia should be a, a mass thing, I mean, that everyone should have the chance to become an academic. And yeah. Um, I don't know. I completely acknowledge that it is very difficult nowadays to, um, navigate between the business demands and the artistic, um, hopes. But I think that this is true of every profession at the end of the day. This is true of every life. And we don't have to just complain that things were easier before. I think there is always a path forward, and, um, that should be our focus.
Paul 01:23:23 Well, Rodolphe, I'm happy for you that your path has become slightly clearer these days and that you've, uh, walked through the fire, so to speak. And I appreciate your time with me today. Thanks.
Rodolphe 01:23:34 Thank you so much. It was, um, really a very nice opportunity to have a chance to talk about what is close to my heart. Thank you.