Brain Inspired
BI 186 Mazviita Chirimuuta: The Brain Abstracted

Support the show to get full episodes, full archive, and join the Discord community.

Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.

She largely argues that when we try to understand something complex, like the brain, using models, and math, and analogies, for example – we should keep in mind these are all ways of simplifying and abstracting away details to give us something we actually can understand. And, when we do science, every tool we use and perspective we bring, every way we try to attack a problem, these are all both necessary to do the science and limit the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.

0:00 – Intro
5:28 – Neuroscience to philosophy
13:39 – Big themes of the book
27:44 – Simplifying by mathematics
32:19 – Simplifying by reduction
42:55 – Simplification by analogy
46:33 – Technology precedes science
55:04 – Theory, technology, and understanding
58:04 – Cross-disciplinary progress
58:45 – Complex vs. simple(r) systems
1:08:07 – Is science bound to study stability?
1:13:20 – 4E for philosophy but not neuroscience?
1:28:50 – ANNs as models
1:38:38 – Study of mind

Transcript

So this really sort of gets to the heart of why I’m saying that there needs to be a warning about how philosophers of mind interpret the results of neuroscience.

The model that you have of perception is computational, in large part because that’s a convenient, simplifying strategy. It doesn’t mean that you have found an inherently computational system within the brain.

Nature by itself is unconstrained. Very often machines are, if you like, the stepping stone needed to get from unconstrained nature to model systems in the laboratory that are the starting point for new theoretical developments.

[00:00:54] Paul: This is Brain Inspired. Hey, everyone, I’m Paul, and my guest today is Mazviita Chirimuuta. Mazviita has been on a few times before, but today she comes on with a new book that she has written, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience. And the book largely argues that when we try to understand something complex, like the brain, something very complicated, and we use models and math and analogies, for example, to do so, that we should keep in mind that these are all ways of simplifying and abstracting away details to give us something that we actually can understand.

And when we do science, every tool that we use, every perspective that we bring, every way that we try to attack a problem to solve it, these are all both necessary to do the science, and they limit the interpretations that we can claim from our results. So she does all of this and more in the book by visiting many topics in neuroscience and in philosophy, many of which we discuss today. And luckily for you (I have a hard copy, and I think I say in the episode that it’s actually a really beautiful looking book, and I have enjoyed the hard copy), the book is available for free digitally through MIT Press. So I link to that in the show notes, and more information and previous episodes with Mazviita on Brain Inspired are in the show notes at braininspired.co, podcast 186. And of course, we don’t discuss everything that’s in the book, but we do touch on a few more topics in the full version of this episode, which you can get when you support Brain Inspired through Patreon. So if you’re interested in full versions and joining a Discord group, et cetera, go to braininspired.co to learn how to support it on Patreon. Thank you to all my Patreon supporters. Okay, here’s Mazviita.

Here’s the copy of the book, by the way. I got a copy, yeah. Is it mirrored? No, you can read that, right?

[00:03:03] Mazviita: Yeah, I can read that.

[00:03:06] Paul: It’s really a beautiful book. And you had emailed me about, you said, thanks for reading my extremely long book, and I didn’t realize it was so long because I was reading a digital copy. Right. But it’s not that long, and it didn’t feel long to me because I enjoyed it so much. But what I was going to say is it’s actually a quite beautiful looking book. Are you happy with the way it turned out?

[00:03:26] Mazviita: Yeah, I am.

[00:03:27] Paul: What about just the M? What’s that about?

[00:03:31] Mazviita: Because my name’s too long. So my previous book. Yeah.

[00:03:35] Paul: Is that going to be your moniker going forward?

[00:03:38] Mazviita: Yeah. So that was how it was for my previous book and most journal articles.

[00:03:44] Paul: Okay, well, let me start by reading perhaps my favorite. It’s a short quote, but perhaps my favorite quote from the book. I wonder if you could guess what this is. Or I’m sure you could guess what it has to do with.

[00:03:59] Mazviita: Okay.

[00:04:01] Paul: What we may conclude is that, as a matter of research interest, neuroscientists have not given up on trying to understand the brain.

I think that comes toward the end of the book. It’s so defeatist. It’s such a slap in the face of neuroscientists.

[00:04:17] Mazviita: No, not given up. Okay. You got to take that in context, which is going back to that question of prediction versus understanding. So with the power of artificial neural network modeling applied to brain data, can you just get away with just relying on predictive power and not focusing so much on understanding? And I’m just pointing out that that hasn’t happened. There’s still neuroscientists that are.

[00:04:44] Paul: Yeah. So the context is understanding defined as you define it in the book. Not our common sense notion of understanding, but read out of context. It’s lovely.

And actually, your whole book out of context could be. I mean, you had mentioned your concern to me that I might find it too pessimistic. Right. Yeah. So, anyway, we’ll get to all that. It’s really an excellent book.

So it’s a shame that I could only read it kind of one time through and then kind of revisit some sections. But I’ll be revisiting a lot of sections, I’m sure. I feel I will in the future.

We’ll get into everything. Well, lots of stuff in the book, but I thought we might start with, and I know we’ve spoken about this a little bit before, maybe offline and on the podcast where you came from and kind of how you got to where you are in the book because you were a neuroscientist? Yeah, I’m not sure if.

Do you use that word when people say, what do you do now? Or do you just leave it off the.

[00:05:54] Mazviita: No, I mean, since my professional affiliation’s been in philosophy departments, I haven’t described myself as a neuroscientist, because that’s not how I make a living. But no, I started out my academic career in, I would say, vision science would maybe be a more accurate description, because what I was researching was the visual system, at that intersection between psychophysics and visual neuroscience. So I did my PhD with David Tolhurst at Cambridge in the early 2000s, and his research had been in classic primary visual cortex neurophysiology. But by the time I joined his group, he didn’t have the neurophysiology lab. His experimental work was in psychophysics. But the point of what we were doing was to model the psychophysical data in terms of what was gathered from the neurophysiology, about the responses of simple cells to grating contrast.
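(As an aside for readers unfamiliar with this kind of modeling: below is a minimal sketch of a contrast-response function of the sort used to relate a V1 simple cell’s firing rate to grating contrast. The Naka-Rushton form and the parameter values here are illustrative assumptions for the sketch, not the actual models used in the work being described.)

```python
import numpy as np

def contrast_response(c, r_max=30.0, c50=0.2, n=2.0):
    """Naka-Rushton style contrast-response function: predicted firing rate
    (spikes/s) of a model V1 simple cell as a function of grating contrast c,
    where c ranges from 0 (blank screen) to 1 (maximum contrast)."""
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n)

# Responses accelerate at low contrast and saturate at high contrast.
contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
print(contrast_response(contrasts))
```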

He still had a collaboration with some neurophysiologists, and we were reading the V1 literature. But yeah, I wasn’t in a wet lab myself.

[00:07:08] Paul: Yeah. Okay.

And you made the move to philosophy. And I have a personal interest in this, selfish interest, because when I was thinking about going back into academia, I considered the move to philosophy.

I’m back in, I guess what you would call a computational neuroscience lab.

But recently I had coffee with Dan Nicholson, who’s been like a process philosophy person. He’s been on the podcast. He came through Pittsburgh and I had to take a bus to meet him out there. You probably know where the philosophical archives are in Pittsburgh. There’s like a building, it’s off campus. Anyway, yeah.

[00:07:51] Mazviita: I’m embarrassed to say I never made it out there.

[00:07:55] Paul: Well, he was really enjoying himself. And he had spent all day sifting through looking for these letters of correspondence between these two scientists decades ago, trying to suss out this historical context and the history and philosophy of science. And I thought to myself while chatting with him, and I really enjoyed our chat. I thought, would I be happy sitting in an archive, going through, essentially sitting in a library most days, searching for letters that have something to do with the topic that I’m honing in on? Or would I be happier doing what I’m doing right now, which is like a lot of data analysis and stuff, and I don’t know the answer.

I’m assuming that you’re glad that you moved to full on philosophy.

[00:08:45] Mazviita: Yeah, that’s right. So my undergraduate degree was in philosophy and psychology, and I did, for most of my degree, prefer the philosophy to the psychology, except in the final year, where I got to do a research project with the late Tom Troscianko, who was such an amazingly enthusiastic teacher about everything to do with vision science. It was really a great experience to have an autonomous research project and actually go to conferences and present that. And he was at the time collaborating with David Tolhurst. And so that was what brought about the opportunity to go on to do the PhD there. So I was kind of surprised, actually, by that choice that I made in the end, to pursue vision science over philosophy. But then the philosophy bug didn’t quite go away when I was doing my PhD. So I had always contemplated going back and actually combining the two, because one of my interests in psychology and neuroscience was always to do with philosophy of mind and seeing how the science could be relevant to those old philosophical questions about the nature of perception. How is it that perceiving the world enables us to know anything? And that’s what brought about my previous project in philosophy, which was about the problem of color and the reality of the color that we perceive, and trying to use those recent scientific results about the integration between the color visual system and other parts of the visual system to argue for a particular philosophical theory of color, which is, broadly speaking, an ecological one. It’s also a processual one.

[00:10:31] Paul: And relational essentially, as well.

[00:10:33] Mazviita: Right, sorry.

[00:10:35] Paul: And relational as well. In terms of philosophy of mind, it’s.

[00:10:38] Mazviita: Arguing that colors are relational properties that to do with the interaction between the perceiver and their environment.

So I’m arguing in that book that there is a way to sort of leverage scientific results to draw a philosophical conclusion. And actually, the new book is to some extent an argument with my former self, because I’m actually now focusing on something that had interested me a long time ago when I was working on primary visual cortex, which is how amazingly simple these models are compared to some of the things that experimentalists find out about the brain. This mismatch between the raw empirical picture of the brain that we get and the very elegant computational models, and then this leading me in a roundabout way to the conclusion:

Given the amount of simplification that is there in the theoretical side of neuroscience, this raises some problems about how philosophers are trying to draw on theoretical neuroscience in their conclusions.

[00:11:45] Paul: In philosophy, it’s such a mixed up bag, because you’re drawing on neuroscience to talk about how philosophy shouldn’t necessarily be drawing on neuroscience. And in a sense, from what you describe, you’ve always had one foot out of the door of the empirical sciences. Because I was going to ask, my guess was that you kind of moved from a more, what is now, like, the common neuroscientific standpoint of computationalism, that the brain makes computations, that you moved kind of from that perspective to a more organismic, holistic perspective. But I guess I’m wrong a little bit. You’ve always been a little bit hesitant, it sounds like, to join the functionalism-computationalism club in neuroscience.

[00:12:33] Mazviita: Yeah, it’s been a mixed bag. I think it’s probably something to do with, on the level of psychology, I was doing the psychology of vision.

I was sort of trained by people that read Marr and Gibson and tried to integrate those two.

[00:12:53] Paul: You’re not supposed to do that. You’re not supposed to do that.

[00:12:56] Mazviita: That was how Tom Troscianko taught his lectures. It’s like, these are these two very valid approaches to vision. And a lot of the work that David Tolhurst was doing was on natural scene perception, and the whole motivation for doing that goes back to Gibson. So ecological psychology and the visual system was something that interested me for a long time. But at the same time, yeah, I was pretty much a paid-up Marrian in thinking, yeah, just look, beginning with the level of computation, going down to algorithm, and then seeing how this could be done implementationally. Yeah, that always made sense to me.

Until recently, thinking more about the limitations.

[00:13:39] Paul: Recently. This book has been a work in the making, like all books. You say recently, but it’s been multiple years. Right. So you retouch on a lot of the ideas from the past seven years, decade maybe, of papers that you’ve written, that you have been building up to this.

[00:13:57] Mazviita: Yeah.

[00:13:59] Paul: All right, well, let’s go over the sort of overarching principles or ideas in the book, and you’ve already touched on some of these, and I’ll say some things, and then you can correct me, because the way that I’m going to say it, I’m sure, is wrong. But one overarching theme is that essentially, it’s like a big mistake, maybe not big, it’s a fundamental mistake that neuroscience makes in taking a computation or a model as real, when in fact, it’s always a simplification.

And this is almost. So I jotted this down as a note, but that neuroscience really should be thought of as aiming to understand, and maybe we can talk about the difference between understanding and control, aiming to understand the technology that it’s dealing with as a model or as a stand in or an analogy for the real thing. It’s actually aiming to understand that technology, not the brain as it is in itself. And there are multiple other messages broadly in the book. But how did I do? And then what would you change and add?

[00:15:10] Mazviita: Yeah, I mean, that certainly sort of captures the broad themes that I’m pushing. For one thing, I would say about this point that neuroscientists are making a mistake. I wouldn’t put it exactly like that in terms of being a realist about their models. So by being a realist, taking the models to be sort of literally and factually descriptive just of how things are going on with the brain. I mean, I think of this question of realism as if you like a metascientific issue.

It can be fine within your scientific discourse just to not have so many of these philosophical quibbles about.

Is this representation literally true of what we’re representing? You can just take it for granted that if it’s empirically adequate, then it’s giving you an adequate representation of your target. But my point is not to sort of reform how neuroscientists talk, but to say that if you’re a philosopher and you have questions about what is perception, what is decision making, and you’re going to the neuroscientist and you’re looking at those models and saying, these are literally and factually non-idealized descriptions of what’s going on in the brain, then you will be misled, because you’re ignoring the distance between the brain’s complexity, which is actually there, and the amount of effort that’s gone on amongst the scientists to strip away that complexity and present you with a neat, simplified model.

And I think the reason why this issue hasn’t been salient to people is, I think, quite a common view, and it’s a view that I used to subscribe to, which is that when you do have a simplified model in science, what you’re actually doing is discovering some simplicity which is inherent in the thing that you’re modeling, but kind of masked by all this surface complexity which is there in the data.

And I think I’m just raising the question of what justifies us in assuming that, given that with the brain, the more that we probe it experimentally, the more levels of complexity seem to be emerging. I don’t think we should be confident that the simplified models are targeting underlying simplicity so much as we are imposing a simplification for our own purposes on this object.

[00:17:56] Paul: And of course, you talk about how this doesn’t just apply to studying things like brains in their infinite complexity, that science always does this, but it’s particularly salient in the neurosciences.

I read it as a neuroscientist, I think. And what you’re saying is that, is it primarily aimed at philosophers? Because what you described as philosophers, which has been a popular thing for the past few decades, I guess, looking to science and to ask how scientists are doing things and then to use whatever they find there and bring it into philosophy.

Caution should be taken there, to not just believe that scientists are on the right track in terms of what is real.

[00:18:45] Mazviita: Yeah.

[00:18:46] Paul: Anyway, there was a bunch of questions, I guess, in there. So you’re talking to philosophers in this book, is that.

[00:18:52] Mazviita: No, I wrote it for sort of both parties, but I would say that I do this paraphrase of Marx in the introduction. So the point of the book is to interpret neuroscience, not to change it. So I’m not telling neuroscientists to change their business, but for anyone that wants to interpret neuroscience, which will be both philosophers and neuroscientists themselves, when they’re speaking to the public or just reflecting on what these results mean for our understanding of the mind in general, people in general, then these are the cautionary tales that need to be taken on board.

[00:19:30] Paul: I mean, as a neuroscientist, when you read this, there are many ways to interpret it. One could be that you’re sort of patting neuroscience on the head, saying, you guys can keep talking about whatever you want to talk about, but you’re actually not talking about something real. You’re talking about your little models, and you’re saying, and it’s fine to keep doing that. And you are saying that. And, in fact, you justify it, and you advocate for a lot of the ways that neuroscientists speak about things like representation and models, but at the same time, you kind of destroy something that seems somewhat precious, perhaps, to a lot of neuroscientists, that what they’re really doing in studying these models is understanding the real brain.

[00:20:17] Mazviita: Yeah, we should say more about this issue of realism. So, yeah, there’s a long standing debate within philosophy of science about the question of realism, which is the question of whether scientific theories, models, other representations, are hitting upon some underlying reality, which, if you like, goes behind the data. So in the way that a theory which posits electrons is actually going beyond just summarizing experimental data that physicists observe in a lab and actually sort of hitting upon this unseen world of particles. So that’s how the realism debate is normally figured. And antirealists or empiricists were sort of skeptical about even the existence of electrons. I’m not yet an antirealist in that old sense of saying, well, what neuroscientists are depicting doesn’t exist.

The view that I’m arguing for, I call it haptic realism, because I’m saying that, sure, when we’re doing neuroscience, neuroscientists are interacting with an actual object that is there, the brain, neurons. No one denies that they exist. No one denies that they have lots of the properties that could be measured experimentally. But what I’m stepping back from is the standard scientific realism, which says that when science is at its best, when it’s got lots of experimental data in favor of it, when it’s a mature enough theory, we can take that scientific representation to be giving us the literal truth of how things exactly are in the system of nature. And I’m saying that we shouldn’t go as far as that, precisely because if you look at all of the process that goes behind science, what you’re having to do in order to generate your theories and models is strip away complexity.

How you go about doing that will be determined by your own prior interests, your own prior theoretical commitments. There will be multiple ways that you can strip away complexity that give you different perspectives on that one same target system.

And so what you’re left with is not a representation of exactly how things are in the brain, independently of your choice as a scientist to idealize and abstract in specific ways. And so, if one of the things that your model is telling you is that what perception is, is essentially a certain series of computations, I’m saying that shouldn’t be read at face value, because the model that you have of perception is computational in large part because that’s a convenient, simplifying strategy. It doesn’t mean that you have found an inherently computational system within the brain, something we touched upon in the last podcast that we did with Spiebeck.

[00:23:25] Paul: Right? Yeah.

Well, okay. And I want to define haptic for everyone, since you use the term haptic realism, which.

Well, and maybe you mean it in a certain way, but it means having to do with touch. So it’s almost like an epistemic term. It alludes to the interactive, the physical kind of interaction that scientists have with their technology in pursuing these research questions.

[00:23:54] Mazviita: You can think of the standard scientific realism that I’m opposing as saying that science gives us a window on reality. So when science is at its best, it’s like it’s wiped away all the distortions. It’s completely transparent and clear. We look at nature just as it is, independently of any of our decisions about how to represent it.

Haptic realism shifts to this metaphor of touch, where when you touch something, you can’t deny that you’re part of the system that is being modeled, if you like. You have to acknowledge that the very choice that you make to go out and discover this object through touch makes a difference to what you end up knowing about it.

Another thing about the haptic metaphor is that your hands are not only sensory organs. They’re also the means that we have to actually effect changes on the world. And I think we should think of science as having this double-faced nature. Science is epistemic. It’s about knowledge. It’s about discovery. But it is also, in the biological sciences, inherently technological. And it’s sort of driven by very many technological ambitions, often therapeutic, often more blue-skies things, where people are going into their research with the idea that they want to use this knowledge for effecting certain changes, maybe not immediately, but long term. So I think thinking of scientific knowledge as giving us this haptically real knowledge just makes us remember that double-faced nature of what we’re doing. So it’s not like we’re just clearing the window on reality, just sitting back: this is how things are, independently of our own interests and choices and long-term ambitions.

[00:25:47] Paul: Along those same lines. So then there is no fundamental objectivity, because the scientist is always bringing his or her interests and perspective and tool making and strategies. And these, in essence, mold their questions into the questions that they can answer, because they need to be able to mold them. And so there is no objective window into reality, in that case. No scientific realism, I suppose.

[00:26:18] Mazviita: Yeah. I mean, it depends what you mean by objectivity. If you’re contrasting it with subjectivity, I would sort of hesitate there, because I’m not saying that science is subjective, in that it’s like the whims of the scientists: what they feel like discovering in the brain is what they will discover. Objectivity, in a more modest sense, of being, like, intersubjective, corroborated, cross-checked in a rigorous way.

It certainly has that quality when science is working properly. There are so many layers of verification and evidence that need to be brought to bear. But that doesn’t mean that science transcends, if you like, the human standpoint. And we see things with a God’s eye view. If that’s what we mean by objective, then I would say, no, it’s not objective.

[00:27:08] Paul: You always have to be careful with your words with a philosopher. Independent, like you said, is what you’re more comfortable with. Yeah, you just said, when science is being done properly. That’s never the case, is it?

[00:27:23] Mazviita: No. I mean, there are just more and more worries that people have about fraudulent science.

[00:27:31] Paul: Oh, yeah, sure.

[00:27:32] Mazviita: So, ideally, everything should be cross corroborated, but it’s not possible, practically speaking, for that to happen. And so there’s stuff in the published record which is fraudulent.

[00:27:45] Paul: Yeah, sure. Okay. Well, so maybe we could shift along these same lines.

So we have this idea that scientists are always bringing their own interests and perspectives to bear, and you talk about three simplifying strategies. So the whole idea is that as scientists we’re always simplifying in order to understand in some respect; even when we’re using these really complex models, there’s still lots of simplification in these models. You discuss, like, three different simplifying strategies that scientists use, whether they know it or not.

Could you discuss those three simplifying strategies?

[00:28:24] Mazviita: Yeah.

So I begin with the point that quantification itself, so using maths to represent things in nature, is inherently a simplifying strategy. So this is an argument that goes back to various people. I’m thinking of early 20th century people like Bergson and Whitehead. But actually, I was reading something yesterday by Michela Massimi here at Edinburgh, and she sees this point in Spinoza, though she’s not making the use of it that I will. But the point is that whenever you make the decision to count a series of objects, you’re making the decision that the similarities between those objects are what matters and any differences you can ignore. So the very idea of mathematically representing something is depending on your prior decision to say that, okay, there’s a whole bunch of observable variability in what we’re measuring or counting, and yet we’re going to abstract away from that and say, these chickens are all just chickens, right? Or these neurons are all just neurons, or these spikes are all just spikes. But in biology, we can observe that there’s variation all the time. And so when we, especially in biology, and I don’t think this is necessarily an issue that comes about in physics, the home of quantification, but anyway, when we quantify and mathematicize in biology, we are always abstracting away from a lot of the variability which is there in the system. So I’m saying that, by itself, is a simplifying strategy.

If you think of statistics, it’s a branch of methods that sort of goes back to the state. The word state is the root of statistics. So if you think of a society that as a state you’re trying to manage, there’s so much going on. You don’t have spies everywhere. How do you know what your population is doing? Whole bunch of methods to do with, like there are certain numerical values that you need to keep track of, like births, deaths, amount of food that your country consumes in a week. Certain things need to be measured and you need to average over them so that you can make sense of those data.

But there’s obviously way more going on in a society than the statistics that the state can gather are able to capture. So you can think of that as a nice example where it’s very clear that there’s a mismatch between the inherent complexity of the system, all of the details and processes that are going on there, and the mathematical representations, which are very effective for certain management purposes. But you can’t say that they capture everything that’s going on.

[00:31:21] Paul: Management purposes like control, you mean?

[00:31:25] Mazviita: Yeah, well, just planning, in the case of, say...

[00:31:28] Paul: I’m not saying, well, I don’t mean control in terms of, like, government controlling or something like that. I just mean in the way that you use control versus understanding in a scientific agenda. Right. So you can use these abstractions, like counting, to talk about the number of chickens that need to be hatched per week in a given population of size x to maintain stability, or to throw the stability into chaos or something. And that’s a way of controlling it. Would that be right?

[00:31:59] Mazviita: Yeah. So for prediction and control of the variables that you’re tracking. Yeah. These are methods that are indispensable, because if you just tried to sort of take in all of the raw, non-quantified perceptual data, you wouldn’t be able to work with that.

[00:32:20] Paul: Yeah. Okay. So that’s mathematics, and you talked about stability and states. And so we’ll come back to that, because that’s an interesting topic in itself. So that’s simplifying strategy number one.

Another one is just reduction in itself.

[00:32:37] Mazviita: Reduction, yeah. I mean, reduction has obviously been a really important approach in biology in the...

[00:32:46] Paul: 20th century, like the most important, almost, maybe besides math.

[00:32:53] Mazviita: And it’s not always thought about as a simplifying strategy, but if you sort of reflect a moment about what’s going on when people reduce, or what the motivations for it are. So if you’re faced with something like a mouse, which is a complex system, there’s a whole lot going on with a mouse or a mouse brain, and then think, well, how can I begin to understand a system of this complexity? If you allow yourself the reductionist assumption that knowing what the parts do, knowing the properties of the parts, that these are the building blocks for knowing the operation of the whole system, then you will give yourself a workable research project, which is like, okay, we’re going to look at the smaller, less complex units. So for the mouse brain, individual neurons, or if you talk about the whole mouse, looking at cells of different organs, and then seeing, from what you can discover about those individual parts in relative isolation, you might get some targets of intervention. So if we’re looking at liver cells, you might sort of find some targets of intervention, of how to ameliorate certain conditions with the liver. Or the bet with neuroscience and reduction was that by looking at individual cells or small groups of cells, we would find out how different, almost cognitive, capacities of the brain come about. So, assuming that the relevant operations were there not as an emergent conglomerate thing out of the systemic interactions of the whole brain, but actually by looking at the parts in relative isolation, you could make progress towards the relationship between brain and mind.

So I’d say it’s a simplifying strategy in the sense that it’s a way of dealing with a complex system.

[00:35:06] Paul: Yeah. There’s also the reduction in the other sense, in terms of the preparation of your experiment. Right. So reducing, for example, in many animal experiments, reducing the mobility of the animal, so that you’re essentially able to control as many variables as possible. And that’s been a reductive approach that has been super, quote unquote, successful, or a major force in the sciences, and has led to lots of progress, using these terms lightly. Right.

[00:35:36] Mazviita: Yeah. So that’s another mode of reduction that I talk about as well.

And it’s something that I first started thinking about, actually, in my earlier graduate work, because one of the things that we were doing, most of my experiments were with sinusoidal gratings.

[00:35:58] Paul: So it’s a reduced stimulus, insanely reduced. It’s kind of crazy to think, right. It’s like the highest control that you could have. So a sinusoidal grating is just like this. You could think of it as, like, kind of blurry light and dark bars that you put somewhere in front of people or animals or organisms, and you try to put it in an area where you know that the neurons respond to best, and then you can change the orientation of the gratings and how thick the bars are. So there’s a lot of minutiae, and we don’t go around at the grocery store reading barcodes and things. In fact, we’re kind of bad at it. That was a terrible example. But it’s just not a natural stimulus.
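(To make the stimulus Paul is describing concrete, here is a minimal sketch of how a sinusoidal grating image can be generated. The function name and default parameter values are illustrative assumptions, not code from any lab mentioned in the episode.)

```python
import numpy as np

def sinusoidal_grating(size=256, cycles_per_image=8, orientation_deg=45.0,
                       phase=0.0, contrast=0.5):
    """Luminance image of a sinusoidal grating: blurry light and dark bars.
    Orientation, spatial frequency (how thick the bars are), phase, and
    contrast are the handful of parameters an experimenter typically varies."""
    coords = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(coords, coords)
    theta = np.deg2rad(orientation_deg)
    # Project each pixel onto the axis perpendicular to the bars.
    ramp = x * np.cos(theta) + y * np.sin(theta)
    grating = np.sin(2 * np.pi * cycles_per_image * ramp + phase)
    return 0.5 + 0.5 * contrast * grating  # mean-gray image with values in [0, 1]

img = sinusoidal_grating(orientation_deg=90.0, cycles_per_image=4)
print(img.shape, round(float(img.min()), 2), round(float(img.max()), 2))
```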

[00:36:43] Mazviita: No. So it’s not a natural stimulus. But, I mean, the theory at the time was that neurons in the area are sort of selectively responsive to these, and, if you like, these were the building blocks of responses to complete scenes. But the problem with this, that we were confronting in the research at the time, was that when you took the models that work well for the reduced stimulus, the gratings, and then applied them to responses that you get when someone would look at a natural image, and these are black and white, they didn’t even have color, then you would see the limitations of those models. So I think when you have reduction as a simplifying strategy, it can work well within the confines of the controlled situation. But then on the question of translation to uncontrolled responses, you can’t be sure.

[00:37:40] Paul: Since you mentioned the gratings, it’s always bothered me, because I came up in the visual neurosciences, and these gratings have always bothered me because of what we’re talking about. And it seemed ridiculous.

And I’m not knocking anyone that’s studying gratings, but there is a large trend of people who have studied gratings in the cognitive sciences. Right. It started out, well, this is going to say something about how we build up images, but then what you end up doing is studying the neural properties and the neural correlations that have nothing to do necessarily with behavior or the complexity of cognition that you wanted to study in the first place. So then you end up studying more and more reduced levels, I suppose.

[00:38:23] Mazviita: Yeah. So this really sort of gets to the heart of why I’m saying that there needs to be a warning about how philosophers of mind interpret the results of neuroscience. Because when philosophers of mind, they’re interested in something like decision making, will, perception and action, they care about what goes on outside of the laboratory. They care about what people and animals are doing in their rich, socially mediated lives. And so if they don’t keep in mind that the version of perception and action or the version of decision making, which is what is being properly modeled and understood within neuroscience because of the need for using controlled and reduced setups in your experiment, if they don’t keep in mind that there’s a mismatch there, they’re just going to think that, oh, well, what decision making most rigorously is is what is modeled in this way. And it does look more rigorous because you have all this math, and you can show quite neat results if you use those methods. But it doesn’t mean that it really translates to what people do, what they are in all of their richly integrated psychological lives, where decision making, perception, and action are not just like one independent sliver which you can isolate from the rest of what is going on.

[00:39:48] Paul: So you’re saying that philosophers, and I would add, news outlets, also should not just read the discussion sections at their face value in neuroscience papers, because it’s those discussion sections and abstracts and introductions that talk about decision making, for example, as if it’s the same decision making in the field and in natural behavior as it is in the lab.

Is that the case? Or what is the answer? Should philosophers then, who want to glean something from a science, then become super experts in that science? Or should the science always put asterisks on the terms that they use or somewhere in between?

[00:40:30] Mazviita: It’s tricky. I mean, most of the philosophers doing this kind of naturalistic work in philosophy of mind are now quite, well, adept with the science, and can sort of read the results.

[00:40:42] Paul: But I can barely understand so many scientific papers that are, like, just adjacent to my field. So how could you expect a philosopher to really see the.

[00:40:52] Mazviita: Well, yeah, I mean, in terms of the depth of knowledge that you’d have as a working scientist using those methods? No. But knowing enough to not only read the discussion at face value, and to realize that there’s some methodological complexity behind it, I think that’s like the standard in the field of philosophy of cognitive science, naturalistic philosophy of mind, now.

So I think this issue goes deeper. It’s not just a question of ignorance, but it does go back to this assumption, this maybe convenient assumption that people have, is that what the reduced method is tapping into, what the simplifying strategy is allowing you to access, is this essential underlying simplicity which was there all along. So if you think that decision making essentially is this thing that you can discover by stripping away all of the interacting factors that come about, when people go to the shops and they’re doing all kinds of other distracted things, then you think, aha, we’ve found the core of decision making in the lab, and now we can be happy with that. But if you drop that assumption, then it becomes moot. So I think that’s where the discussion really needs to be had. And I think this is, properly speaking, of philosophical discussion. It goes back way into the history of classical philosophy. Questions about essences, questions about the shifting nature of appearances versus the underlying stability of reality. So this notion that reality is more simple than it appears to us is actually quite an old philosophical notion. Yeah.

[00:42:45] Paul: I mean, decision making, let’s be clear here, it’s just a stochastic variable that goes to one of two bounds, right? That’s the way decision making works.
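(Paul is alluding, tongue in cheek, to bounded-accumulator or drift-diffusion style models of decision making. Below is a minimal illustrative sketch of that idea under assumed parameter values; it is not a model from the book or the episode.)

```python
import numpy as np

def bounded_accumulator(drift=0.5, noise=1.0, bound=2.0, dt=0.01,
                        max_steps=10_000, rng=None):
    """One simulated 'decision': a noisy evidence variable drifts until it
    hits the upper or lower bound. Returns (choice, reaction_time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return +1, t   # e.g. a "rightward" choice
        if x <= -bound:
            return -1, t   # e.g. a "leftward" choice
    return 0, t            # no bound reached within the time limit

choices = [bounded_accumulator()[0] for _ in range(200)]
print("proportion of +1 choices:", np.mean([c == +1 for c in choices]))
```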

[00:42:56] Mazviita: Yeah.

[00:42:58] Paul: So we got a little off track there, because of me, of course. But we’re still on the three simplifying strategies, the third of which is analogies.

And analogies and metaphors, aren’t they just how we think? Don’t we rely on them to think anything?

[00:43:16] Mazviita: Yeah. So we do. So the question of analogies, in the sense that I’m discussing it here. So it goes back to the point that we maybe didn’t explain enough for your audience about. Am I saying that neuroscientists just understand their technologies and not the brain itself? So the relevant analogy here is the analogy that’s drawn between biological brains and computers.

And I’m saying that analogy is a simplifying strategy. So what you’re doing there is you’re taking an invented system, a computer, a digital computer, which is relatively well characterized compared to a biological brain. And by lining up the similarities between those two systems, you can use that relatively simple machine as, if you like, a model or a proxy for what you’re trying to leverage in your understanding of the brain. So drawing on analogies between things that are very, very complex and not very well understood and things which are relatively simpler and better understood is a common scientific strategy for just sort of bootstrapping up from where you would be otherwise, if you were just starting with something that doesn’t make a lot of sense by itself. So you can think of analogies as giving you a lens through which to highlight certain relationships which would maybe be more murky or hidden in the full complex system, but making you sort of say, aha, there’s a certain bunch of relationships that we see in the relatively simple system, and it’s similar to what we find in the complex system. So let’s just focus on those similarities, ignore all that background complexity, and then that allows you to make a certain amount of progress.

[00:45:16] Paul: But is there... a phrase jumped to mind, like the analogy trick or something. Because you start off right, you can start off with good intentions and saying, well, we’re going to focus on the similarities. But then over time, I don’t know if it’s because our brains aren’t as awesome as we think they are, and we’re just lazy and we need to simplify, but then the analogy gets replaced with a realist view, and the analogy becomes reality. Sort of over time, it slips into a reality. Is that a thing?

[00:45:49] Mazviita: Yeah. So that’s what I think has happened with the computational brain. At least that’s what I’m arguing has happened. Because I see that there were good reasons, once the digital computer was available as a machine and notions of information processing were being theorized with the machine, to use that as a lens through which to try and figure out what’s going on with brain physiology.

But there’s been this tendency to reify the analogy, or just ignore the differences between the two, which I think is starting to be misleading.

[00:46:33] Paul: So it’s interesting that you use the computer as an example here, and you’ve written about this plenty. One of the things that you write about in the book is, and you just mentioned it, that once we had this physical computer machine, we could then use it to analogize to brains. And then there’s this trick that happens over some sort of time where you get more and more comfortable with that analogy, and then it becomes the real thing, computers. So you argue in the book that technology essentially precedes science, right? So science has to have an object of study, and so often, like, a technology will come about, like a computer, and then science can look to that and then use their simplification strategies and say, okay, well, the brain is a computer, for example. Although in this case, the computer was invented with some inspiration from science. Right. What we understood, quote unquote, about neurons.

[00:47:31] Mazviita: The computer is an interesting case, because if you think of the Turing machine, it’s a machine that is supposed to duplicate the work of a human computer. So a person, say, a human computer doing calculations. Yeah. So it’s a model of a particular formal activity that people were doing, but a machine invented to duplicate that.

[00:47:50] Paul: That’s interesting, because in that case, the technology is the human doing the computing. Right? Yeah, well, that’s what. Yeah.

[00:48:00] Mazviita: Anyway, so I think it says something important about what machines often are, which is labor saving devices. And the labor is originally the labor done by an animal, if you’re talking about an ox pulling stuff and you get a steam tractor, or it could be a human doing mental calculations.

[00:48:18] Paul: But your argument in the book is that understanding always comes after there is some simplified physical object to understand.

[00:48:30] Mazviita: Right.

To say more about this thing about technology preceding science, and then why that relates to understanding. So one of the classic examples of this is in 19th century physics. So how steam engines preceded the invention of thermodynamics.

So engineers tinkering around with machines, getting all of these devices working, and scientists coming along, often with the aim of theorizing those machines in order to increase efficiency. But thermodynamics was able to make the progress that it did because the machines provided a model system with which to explore thermodynamic relationships. Whereas if you just go out in nature and think about how energy transfer happens, it’s too uncontrolled. You can’t just do that. But once you’ve got a machine, then you can precisely use that as an inspiration, maybe even for laboratory studies, where you can do things like track conservation laws or write down conservation principles. So the point is that nature by itself is unconstrained. Very often machines are, if you like, the stepping stone needed to get from unconstrained nature to model systems in the laboratory that are the starting point for new theoretical developments. So that’s where you see this pattern often in the history of science, not just with thermodynamics, maybe mechanics in the 17th century, and I’m arguing with cognitive science in the 20th century. You had the computers, and then you had the more general cognitive science, which was supposed to apply both to biological and machine systems.

So then understanding is aided by the study of machines precisely because they are these simpler systems. And generally speaking, when you’re thinking about, well, what is it that scientists, even doing basic research, get to understand, then my point would be, strictly speaking, what they understand is their lab-created system. And then there’s the expectation or the hope that this knowledge will apply in the unconstrained realm. In physics, it tends to work quite well, because physical objects are not particularly context sensitive. So if you get a few relationships figured out in the lab, you’ll often be able to find enough similarity with how those things behave outside of the lab. With mice, with people, you can’t always make that assumption that what you are discovering in your heavily controlled lab situation, or in your completely non-biological machine setup, will transfer to unconstrained situations. And so there’s this question about understanding being tied to having a technology or machine, or having a very controlled, almost machine-like setup in your laboratory. That’s behind this.

[00:51:46] Paul: One of the downers in the book, to me, is that I’ve frequently mentioned, well, I want to understand. But what you’re saying is that the scientist’s role is not to understand in that respect.

Well, understanding the thing in itself, they’re understanding the technology, right? But the role of science is essentially manipulation and control.

And then I thought, I shouldn’t be in science, maybe studying this complex thing.

So what you’re saying, though, is that I can’t understand the thing itself if I apply the scientific methods of simplification.

[00:52:26] Mazviita: Yeah.

If you accept that the thing itself will have all of these dimensions of complexity, all of these various properties which are not revealed in that simplified laboratory preparation. I mean, this point about sort of what is made, what’s a technology and what’s not, I think in biology, it’s really interesting because it becomes a tricky question.

If a mouse is a knockout mouse, not a wild-type mouse, is it an artifact? In a certain way, yes. So that’s the reason why biologists deal with inbred strains, knockout mice as well, though actually inbred strains are the more revealing example here: they’re trying to get away from the flukiness that is just out there with wild types. They want their mice to be more and more stereotyped, because that is what will help them get the repeatable, predictable results, which will allow for a nice.

[00:53:38] Paul: Comprehensible model, lower p values also.

[00:53:41] Mazviita: Yeah, of course.

And so, in a way, just saying, okay, we’re working on inbred mice. You are working on a technology. So your understanding is targeted to that technological version of the mouse.

And when we’re talking about biomedical research, obviously, you’re hoping that that model system will reveal relevant things which will apply not only to wild type mice, but to humans. But of course, we know we’re dealing with a model system, and the model is never identical to the thing that you’re ultimately trying to target.

[00:54:19] Paul: I wonder if pharmaceutical research is a good example of this as well. I don’t remember if you mentioned that in the book, but almost all pharmaceutical medical discoveries are discovered by accident. Right? So you bathe the system with this particular drug, but then that informs how you think about how the system works. And then all of a sudden, you have a quote unquote technology in that respect.

[00:54:42] Mazviita: Right? Yeah. So that’s another example of a technology not coming about from basic science, but going the other way. So with people finding that certain drugs are antipsychotics, and then people looking at the mechanism and saying it’s something to do with dopamine, and that leading to the dopamine hypothesis about what schizophrenia is.

[00:55:07] Paul: You used the term theoretical a few times when talking about this technology or some artifact needing to precede an understanding, using it as an analogy to try to understand other systems, though you’re really understanding the artifact. But the way that I read it in your book is more that it’s the scientific approach, the experimentation and perspective, where the technology needs to precede that scientific understanding or control. But I was wondering where theory does come into this. And you mentioned you can’t just go out into the world and just sort of theorize about something in nature. But I don’t know, there is a ratcheting effect, right, of one and then the other and then the other. So can theory precede technology? Or does theory itself need to come after some object, some artifact of study?

[00:56:09] Mazviita: I don’t know. There’s just too much going on in the history of science for there to be one clear answer to that. But there were some historians of science that I was drawing on when I was writing these sections. So they were looking at the context of the scientific revolution in the 17th century, and they were pointing out that there’d been so much going on in industry in the late Middle Ages and Renaissance, new methods of mining, all kinds of things, which became the impetus for people that were interested in relationships to do with things in the natural world to start investigating, I don’t know, forces and optics and all these other things. With historical questions, we don’t know exactly what the cause and effect relationships were, et cetera, et cetera.

It seems, in the light of that, a bit implausible that a theory ever just popped into a scientist’s head out of nowhere, and then they got to apply it and invent a whole new realm of technology. But given that technology has always existed in human society, and people interested in relationships between things in nature have always existed, it seems like they’ve just been growing up together for a long while. But what I’m really trying to resist is this notion which takes things the other way around, which goes, we need to do basic science in order to foster technology. So making it seem as if it always goes the other way, from the basic science to the technology.

[00:57:59] Paul: So it’s not that it always has to be the case, it’s just that there are a lot of examples historically where it’s the case. Yeah, I was going to ask you this sort of later as an extra thing, but one way that this can happen. So it’s interesting that a lot of advances in a science are made because someone who is not in that science, someone who’s from a different area of science, comes in, visits the new area. And I imagine part of the reason is because they bring their own artifacts, their own ways of studying the artifacts in their own science, and then make an easy and quick analogy to the new science, and then all of a sudden, there’s a lot of low-hanging fruit around that particular artifact. Would you agree with that?

[00:58:47] Mazviita: Yeah, I think that’s often the case.

I think that’s precisely right. Having a different background gives you a different suite of analogies, which you can then often fruitfully apply to other new territory.

[00:59:03] Paul: So it’s still the same thing. It’s almost like cheating, because it’s not like you came in, you were theorizing in some space, and then brought your theory in. You’re really importing the technological artifacts that you were studying in that other space into another. It just seems like a good strategy, career strategy.

[00:59:21] Mazviita: Yeah, but we should think of the theory as being part of that ecosystem with the technologies as well. So when we’re talking about the use of technologies in science, they can have a very elaborate theoretical overlay with them. It’s not just like they’re gadgets that anyone can use. You need a sophisticated understanding of the principles of operation of a thing in order to employ it.

[00:59:47] Paul: Yeah, you’ve already mentioned you’ve made a few allusions to physics and, quote, unquote, simpler systems that are under study and the great advances in physics and successes in physics over the years. And this book is primarily aimed at more complex systems like brains and minds, which you get into, especially later in the book. But when I read the book, all of these principles seem to apply to all different levels of science.

My reading is that you’re concerned with these more complex sciences because they’re affected more by these principles or mistakes.

[01:00:31] Mazviita: Right.

[01:00:31] Paul: So can you talk about that division between studying simple, quote unquote, simpler systems and complex systems and sort of the history of that, and how these ideas affect the different types of systems that you’re studying in different ways?

[01:00:52] Mazviita: Yeah, there’s various things I could say.

One of the philosophers of science that really got me interested in the field early on, and I think sort of left an imprint on my mind, is Nancy Cartwright. So she has this book from 1983, How the Laws of Physics Lie, which is one of the first books to really make idealization an essential topic in its account of how science operates.

So she says that the laws of physics lie in the sense that if you look at the most basic, fundamental laws of physics, they apply, strictly speaking, only in a very idealized, controlled system. And then to get them to apply beyond that, you have to do a kind of tinkering and fudging and altering to get them to apply to the messy, non-idealized system. So, strictly speaking, they are only true of a very controlled system which, as close as possible, conforms to the actually impossible standards of an idealization, like being frictionless, for example.

[01:02:04] Paul: Spherical cows. And I should say, another reason why this is interesting and timely is because neuroscience has really become more integrated into society, and it’s just a bigger scientific endeavor. But the vast majority of studies in philosophy regarding the history and philosophy of science have dealt with physics, right? And it’s only in kind of more recent times, although maybe in the past, what, 150 years, 100 years, that it’s been applied to biological systems. But I think that the world of applying the philosophy of science to complex systems has grown and grown. A lot of the things that you allude to are some of that early biological philosophy of science. But then the vast majority has to do with the physics, right?

[01:02:53] Mazviita: Yeah. So the amount of attention philosophers of science pay to science outside physics has definitely been growing. But one of the reasons why the field was so physics-centric for so long is that physics is the queen of the sciences. It has the most fundamental theories, the most rigorous methods, the most mathematically sophisticated approaches, or at least that’s how it was for a long time. Now, neuroscience is very math savvy, as we know.

[01:03:22] Paul: We know. Physics envy.

[01:03:24] Mazviita: Yeah.

But then it’s interesting to ask why physics made such rapid advances. Why did it become so impressive? And often physicists will admit: we have the easy objects. We study things that are not complex in the way that biology is.

And a lot of the complexity of biology is precisely to do with the context sensitivity of biological organisms. Electrons aren’t finicky. They’re not going to behave differently depending on whether the blinds are a different color that day.

[01:04:03] Paul: That’s what physics says, anyway. But we don’t know that for sure anyway.

[01:04:08] Mazviita: The gap between the predictions that physicists make, on the basis of the assumption that electrons aren’t finicky, and what actually happens is small. But in psychology, as much as you try to control your variables, there are going to be factors affecting your object of study which you might not be aware of. And when you go from one country to the next, electrons don’t change their characteristics. People do.

So human psychology is sort of at the extreme end of that context sensitivity, but everything in biology is, to some extent. We’ve recently been reading into this literature on plant behavior and plant cognition, if you like. That’s the controversy: do plants cognize?

Yeah, plants do show a very pronounced amount of behavioral plasticity, in ways that might surprise you.

[01:05:05] Paul: Behavioral plasticity. I want to emphasize that you said behavioral, because...

[01:05:09] Mazviita: Yeah, exactly.

The use of such terms is controversial, but it’s precisely because plants show this plasticity, which isn’t what you’d expect on the assumption that they’re somewhat machine-like in how they operate. And one of the things that I noticed in one of the papers is that if you want to see the full amount of plasticity in plants, you have to go to the wild type, because if you look at agriculturally bred species, you’ll find less of this. Obviously people want their wheat and corn crops just to do the same thing again and again, and you can control the field so that the crop doesn’t have to rely on its innate plasticity to deal with things like drought and varying nutrition levels and so on. But the organism in its unconstrained ecology has to have a lot of plasticity in order to deal with the vicissitudes of life. So I think that this context sensitivity and plasticity is very pervasive in biology, but it means that the mismatch between what you can show in the lab and what will go on outside of it is going to be present in a way that you won’t find in physics.

[01:06:32] Paul: When you were talking about plants, I immediately thought: if you raise a plant inside with no wind, it’s also going to be less adaptable or plastic, right? So you need to take it outside every once in a while and let it experience weather if you want it to grow into a strong, healthy plant. So in some sense it’s just hard to control for anything, right? Yeah.

Especially with complex systems. Right. You said that psychology is kind of at one end, the height of complexity, but then isn’t there sociology, quote unquote, above that? I don’t know how you think about that.

[01:07:09] Mazviita: Yeah. As an extreme example: you can put one subject in a lab, but you can’t put a society in a lab. But, yeah, this notion that the sciences get ordered by levels of complexity was put forward by Auguste Comte in the 19th century. He’s the father of positivism; he also coined the word sociology. So he said that physics studies the things which are simplest and whose laws are most general, precisely because the things it studies don’t change their behavior depending on the various things that you do to them. But then you build up from there to chemistry, to biology, psychology, and then sociology at the top. That was his model.

[01:07:57] Paul: And soon, galaxies of organisms, right? Well, maybe I lost my train of thought about where I was going to go.

In a large sense, the modern boom of dynamical systems theory in neuroscience is imported from physics, essentially.

So I’m interested in a processual perspective on studying these things, and I struggle to figure out how to do it. And dynamical systems is an inch toward that, right? Because it treats systems not as stable things, although it kind of does. It still reduces to a state, right? And you can have a trajectory through a state space, but you’re still defining a space, which is a mathematical, abstracted, stable, static thing.

So it’s not exactly the full kind of holistic, process based approach that I want, but I’m not sure if it’s as good as we can get or if it’s just like the latest kind of push.
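To make the state-space picture concrete, here is a minimal sketch, in Python, of the kind of object dynamical systems approaches work with: a fixed update rule and a fixed two-dimensional state space through which the system’s state traces a trajectory. The particular matrix and parameters are illustrative assumptions, not anything from the episode or the book.

```python
# A minimal, illustrative sketch of a "trajectory through a state space."
# The 2-D linear system and its parameters are assumptions chosen purely
# for illustration; they are not taken from the episode or the book.
import numpy as np

theta = 0.1
# Dynamics matrix: a slight rotation with mild damping, so the state
# spirals in toward the origin over time.
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])      # initial condition, a point in the state space
trajectory = [x.copy()]

for _ in range(200):          # the state evolves step by step
    x = A @ x
    trajectory.append(x.copy())

trajectory = np.stack(trajectory)
print(trajectory[:5])         # first few points along the trajectory
```

The state changes from step to step, but the space itself and the update rule A are fixed, static mathematical objects, which is the tension Paul is pointing at here.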

Are scientists bound to studying static things?

So you talked about how even just counting is an abstraction to a static thing, treating something as if it’s a fixed thing. Are we bound to that approach? Can we not study things in a different way? Do we always have to study things as if they’re static?

[01:09:27] Mazviita: Yeah. So what I’m talking about in the book there is how one of the ways that the brain is extremely complex and challenging for investigators is that it’s always changing. It never returns to initial conditions: how you treat a mouse one day will affect it long afterwards, in ways that you don’t know. That’s just the nature of what it means to be a living thing with a brain, at least. We see from our own experience that all of our memories of what happens from day to day mean that we’re never quite the same person going forward. And I think that’s probably true in animal life as well. So there is this inherent changeability.

It makes sense if you think about what brains have to do, which is deal with a constantly changing environment.

Like I said, with the plant, to survive out in an unconstrained ecosystem, they have to have all of this plasticity, and they’re just stuck in one place. Animals, again, they have to move around and deal with novel situations all the time. So experience is always affecting us.

But in science, so much of the drive is toward inductive knowledge, so actually being able to use past experience in order to predict what happens in the future. So going from “we’ve found that property X causes Y in the past,” we want to see how stable that relationship is, so that going forward, when X happens, we’ll see that Y happens.

Maybe most of science has that inductive character. You’re looking for stable regularities that you can expect to hold in the future.

Hume’s problem of induction just asks the question, how do we know that induction will work? Well, it has always worked in the past, but that’s just applying induction to say it worked in the past, so it should apply in the future. And he says that what we’re really banking on is the assumption of the uniformity of nature, as he calls it: we just assume that nature is stable enough for us to be able to reliably base future expectations on past experience. Now, obviously, this works well enough most of the time for us to get by, but the question is, when we’re wanting to do very precise science of changeable, complex systems, maybe we start to see that principle of the uniformity of nature not quite work for us in the ways that we want it to. And maybe we start to see some limits on how much knowledge we can acquire through induction and how much prediction we can gain through that.

[01:12:29] Paul: On the other hand, it’s impressive in complex systems that you do get reproducibility in many respects and stability, right?

[01:12:37] Mazviita: Yeah, that’s right. I mean, if we look at animal behavior, it’s not completely unpredictable or chaotic or anything like that. So what I’m talking about with complexity here isn’t just stochasticity or chaos; what I’m talking about is sensitivity to factors which cannot be anticipated. The amount of characterization that you would have to do to be able to make very rigorous inductive predictions is kind of inexhaustible.

[01:13:18] Paul: Yeah.

One of the things that I wanted to ask about, and this goes back to the very beginning of our conversation, is your leaning toward, well, you argue that we should think of organisms as being embodied and embedded in nature, and there’s this ecological psychology kind of bent that you have.

Maybe you can comment on that. At the same time, you say that maybe the four E approach, the embodied, enactive, embedded...

What is it?

[01:13:50] Mazviita: I can never get them all. The embedded, the ecological... ecological.

[01:13:54] Paul: Thank you. The four E approach is something that philosophy maybe can work with, but it’s not something necessarily that neuroscience itself can work with. So that’s what I really wanted you to elaborate on and discuss.

[01:14:11] Mazviita: Yeah.

The way that I argue for that point is to look at how, in order to simplify and come up with workable, rigorous models in science, where you have a small number of well-constrained variables and you can show the quantitative relationships between them, you need to impose boundaries around a system. You can’t go along with the idea that this item over here is somehow boundlessly interacting with everything around it. You have to treat things, as far as possible, as closed systems to do that rigorous theoretical work. And I’m saying that the core insight of the four E tradition is actually to appreciate that what it is to be a minded being is to be somehow boundlessly interfacing with your environment. So the brain is boundlessly, richly interfacing with the rest of the body, and the body is richly interfacing with its environment. And that’s what cognition is: this very rich, interactive process. So I’m saying there’s a trade-off between acknowledging that and actually wanting to do rigorous science, which means you need to start enforcing boundaries around things and treating the brain as if it’s more or less in isolation from the rest of the body. So you’re ignoring the vasculature, you’re ignoring immunology, you’re ignoring most of the rest of the brain, and just looking at the circuit in isolation.

You need that to do the kind of modeling work that sets the standards for rigor in theorization.

But you can no longer then hold on to that insight that, well, actually, what’s really going on when things are cognitively interacting with their environment is that there are just numerous, or innumerable, channels of interaction and sensitivity.

[01:16:24] Paul: So then, first of all, what do four E scientists, apologists, think about this claim, essentially, that the embodied perspective is just that, a perspective, and one that almost can’t be approached by science?

[01:16:41] Mazviita: Yeah, I mean, the book isn’t out yet, so I’m waiting to hear what responses it elicits.

Yeah, no, I’ve discussed it a bit with my colleague Dave Ward here. We ran a reading group on the manuscript, and he’s a philosopher; he works on four E cognitive science. So he told me he’s writing a paper which actually pushes back on some of what I said, so I’m waiting to read that. And hopefully others will chime in. I mean, this is the thing in philosophy: when you set out an argument like that, you want to hear what other people say, because I’m sure there are responses that can be made to it. I think the key thing will be to look at examples of work in science which has actually managed to mediate this tension. I mentioned Gibson before as someone that was influential on me and the people that mentored me in vision science. I don’t think his whole research career was a waste of time. So I think there is an inherent tension there, and I do think that philosophy probably is the better arena for working out these four E insights, but I don’t think that means that mainstream science should just completely ignore four E principles. I mean, what I say in the book about Gibson is that he was an important figure, but also, to some extent, people looked at him and said, well, that’s not really how we should be doing things.

And ecological psychology has sort of struggled to attain the prestige of other branches of psychology. And I’d say, well, maybe that’s why. Maybe it’s because they were trying to walk this path where there is just an inherent tension, an inherent compromise that can’t easily be struck.

[01:18:39] Paul: So this is a theme I regularly felt when reading your book, and there’s a tension in myself. I’m always wanting to think:

I said this about process philosophy, and I think about it with ecological psychology: how can this inform what I’m doing as a neuroscientist?

[01:19:00] Mazviita: Right?

[01:19:00] Paul: So some sort of like, what is it? Like, applicable philosophy or something for neuroscience?

And then, reading your book, I recurrently had this feeling of a sort of letting go and saying, okay, well, it can’t really inform what I’m doing, but what it can do is serve as a sort of checks and balances in my own mind about what I’m doing.

But then there’s a tension between that and the fact that what I want to be doing is figuring these things out, and you’re constantly reminding me that, using the tools that I have as a scientist, I essentially can’t approach them. So I’m trying to understand what good it is, this letting go and accepting that there are these things, and we’ll get into mind maybe eventually, if we have time, that aren’t touchable with the tools that I’m using these days.

[01:20:01] Mazviita: That’s interesting to reflect on, because I wrote this book as someone no longer doing neuroscientific research.

[01:20:08] Paul: And you don’t care anymore. You don’t care how satisfied you are, you don’t need to be? Well, no, I care. I think you phrase it as you’re not bound to the same epistemic aims, or something like that.

[01:20:19] Mazviita: Yeah, exactly.

[01:20:21] Paul: Another way to say that is that you don’t care anymore.

[01:20:24] Mazviita: No, but I’m not in that situation of, philosophically speaking, having a view, say a four E-sympathetic view or a processual view, and then also trying to play that out in a research environment in neuroscience. So that’s the tension I’m not feeling. Yeah, I mean, I had more in mind as the stereotype the neuroscientist who is just a hardcore computationalist and thinks, aha, that’s the philosophy, and it is supported by the results that I’m getting. And I’m saying, no, the philosophy that you have is a convenient simplifying strategy. You shouldn’t let the success of this modeling strategy, the impressiveness of these results, convince you of that philosophical position; actually, we should hold on to those insights in the four E tradition which point to the criticisms of that hardcore computationalism.

[01:21:29] Paul: So it’s really just the interpretation of what you’re doing, instead of the.

You’re saying, sorry, you’re agreeing with me that it’s more about the interpretation of your scientific results. What I’m wondering is what the danger is to me, because I accept your arguments in this book, essentially. And in some sense, I feel like a leaf blowing in the wind so frequently when I read things like this, because I’m like, oh, she’s right, she’s right, oh yeah, that’s right, however I feel about it. But then I thought, what are the repercussions for me moving forward if I accept all these views? It’s just in my own interpretation of what I’m doing, isn’t it?

[01:22:11] Mazviita: Yeah, I suppose it is. I find this a really interesting question to discuss. I mean, it’s nice to hear that you found it compelling.

[01:22:21] Paul: It is compelling, but part of the reason is because you’re constantly reminding the reader, it’s okay, it’s fine. You can let go. And it’s fine because you have no metaphysical claim on things either. Right? So you’re metaphysically neutral. Go ahead. Sorry.

[01:22:38] Mazviita: Yeah, I mean, I don’t. I don’t see the danger. I’m not saying you should give up your day job and not do this kind of research.

I’m saying that if your long-term goal as a neuroscientist is to discover the essential features of the brain systems that lead to certain cognitive capacities, then this isn’t the right way to go about it. So, yeah, it’s saying that you have to be more modest about what you think the neuroscience will lead you to.

For people like me, who went into neuroscience precisely because they wanted to learn about things that were philosophically deep, I’m saying you might be dissatisfied. And so, for neuroscientists who are still actively trying to tap that, yeah, maybe that is bad news.

[01:23:40] Paul: The other bad news for someone like me, right, is that one of the conclusions you draw in the book is that there is a fundamental limit on what science can say even about brains, let alone minds, right? So even studying something as complex as the brain in itself. So it’s a disheartening conclusion, but at the same time, you also say, but it’s okay, it’s okay, you can keep doing what you’re doing.

[01:24:05] Mazviita: Yeah.

One of the ideas there is that there are limits to what can be discovered through this third-person methodology, taking the brain as an external object and employing the methods of mathematization and so forth. But it doesn’t mean that there aren’t other methodologies that, given that limitation, one might want to see as complementary to it. I mean, if you think of our first-person experience of our own mental lives, or our second-person experience of interacting with other people, it just seems right to say these also give us insights into what it means to be an embodied person with a brain. It’s just that they haven’t had the prestige of science. And I think one of the targets of my book is scientism, this notion that science by itself will have all the answers to the questions that are important to us. And by saying, well, there are limitations to what it can tell us about how the brain gives rise to mental life, about what it means to be a cognizing person employing their brain, that opens the door to other approaches that you should see as complementary to it, rather than thinking that they don’t have a place just because, in comparison to science, they’re not using the same methods.

[01:25:27] Paul: But those same limits don’t apply to control as an aim of science. Right. So just on the understanding part.

[01:25:36] Mazviita: Yeah, exactly. So my point about control is that we should realize that it very often occurs by altering the thing that you are aiming to control. Right. So you can, I don’t know, create certain cyborg systems, et cetera, that can instantiate behaviors that you might want them to, through tweaking the brain. But if you expect control to translate across the board, then my view does suggest pessimism about that. So if your aim is control of all of the wild-type, unconstrained phenomena...

Yeah. My message there isn’t so positive. Though, you know, technology is always unpredictable; people serendipitously discover hacks for this and that. But I think my view is somewhat pessimistic about the systematic discovery of all of the things that one might want to learn to control through lab experiments and then translate beyond the lab.

[01:26:49] Paul: I wonder what you think about brain-computer interfaces. Right. So this is an interesting case where you can put some electrodes in someone’s brain, read out a very, very small proportion of their neurons’ firing activity, run it through a quite simple, it turns out, mathematical transformation, like a linear decoder, and still get the person, or animal, being able to control, say, a cursor on a screen without moving a hand or something like that. So that level of control is actually quite impressive.

My point is that it’s interesting that you can use such relatively dumb, simple things on a complex system. Although my question is what you think about that. I mean, I wasn’t going to ask you this, I didn’t prepare to ask you this, but in a sense you are then creating a technology. In a way, you’re understanding. But what are you understanding, then? What is the technology that you’re understanding, if what you’re controlling is a very reduced thing?
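For readers who want a concrete picture of what such a linear decoder amounts to, here is a minimal sketch fit by ordinary least squares on synthetic data. The neuron count, the Poisson firing rates, and the noise level are illustrative assumptions, not the actual methods or data discussed in the episode.

```python
# A minimal sketch of linear decoding for a BCI-style readout:
# cursor velocity as a linear function of a small number of firing rates.
# All numbers here are synthetic, illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 1000

# Synthetic "ground truth": a linear map from firing rates to 2-D velocity.
true_W = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder by ordinary least squares: velocity ~ rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new burst of neural activity into a cursor velocity command.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
print(new_rates @ W)          # estimated (vx, vy) for the cursor
```

In a real closed-loop interface the user also gets visual feedback, so, as comes up next in the conversation, the brain adapts to the decoder at least as much as the decoder is fit to the brain.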

[01:27:56] Mazviita: Yeah, yeah, no, I think that’s a really interesting example. And so when I was in Pittsburgh, yeah. I got to know some of the work going on in the Schwartz lab.

[01:28:05] Paul: Andy Schwartz.

[01:28:05] Mazviita: Yeah.

And they were using these quite simple linear models to decode motor cortex, and I noticed the parallel with the modeling that I was doing of primary visual cortex, and this question: how can such simple, basically linear, models actually get you anywhere with this?

What was interesting about the BCIs is that there’s a certain amount of learning that goes on.

[01:28:34] Paul: Your brain is adapting, it gets the feedback...

[01:28:38] Mazviita: Over which signals need to be tweaked so that it will get...

[01:28:42] Paul: The cursor movement that it wants. You plug it in, and it’s not like you can just move a robot arm wherever you want; it takes training. Yeah, you spend a pretty good deal of time in the book talking about using these artificial neural networks to model brain activity, to understand brains, and this has been the modern trend. It’s a hot topic in neuroscience and machine learning. It’s what this podcast is largely about, or used to be about, anyway. I don’t know if it is so much anymore.

In fact, I’m going to moderate a panel in a couple of days about NeuroAI, this convergence, or mismatch, between neuroscience and AI.

And you say, yes, these are complex models, but they’re also way simpler and more abstracted than the things they’re modeling, and therefore they’re subject to the exact same limitations.

[01:29:35] Mazviita: Essentially, yeah, that’s right. So, one thing about artificial neural networks and the brain-inspired confluence that we’re talking about here is that I think there were hopes for them being the technology that would lead to theoretical insights into the brain. So, as we talked about before, having the invention that then reveals some general principles of cognition, that’s what I think people have been hoping for.

[01:29:59] Paul: Those hopes are still high. Those hopes are still there.

[01:30:01] Mazviita: Yeah, maybe not in my...

[01:30:03] Paul: Mind anymore, but you’ve let go.

[01:30:09] Mazviita: But what I’m sort of pointing out here is the theoretical assumption behind these, which everyone is well aware of: that it’s only this very abstracted level of neural wiring and connection strengths that is assumed to constitute the cognitively relevant properties of the brain. There’s so much evidence that there are many layers of biological complexity that are important for biological cognition. So even though there can be superficial similarities between what an ANN does and what a biological brain does, I think we’re not going to get much beyond the superficial similarities, because the assumption that cognition just takes place at that Marrian, very top, level, independent of implementational details, I just don’t think that’s correct anymore, for reasons that I give in the book and elsewhere.

I mean, one of the things that has got me interested in the basal cognition field is precisely the reasons for thinking that principles of information processing are very pervasive in cellular life. So things like electrical signaling, which you see in ontogeny and development, and all kinds of things to do with just basic biological maintenance, have turned out to be fruitful to characterize in terms that are cognition-like.

It then seems implausible to me that in the brain, all of those inherently biological processes are not also somehow relevant to cognition, which is what’s assumed in the ANN approach: that the hardware, the implementation, is irrelevant.

[01:32:11] Paul: Yeah, and you talk about this in terms of multiple realizability, degeneracy.

And I’ve been thinking along these lines for some time now, as well.

And you have influenced me over the years in this respect, as have others, in thinking about the importance of life processes in these cognitive terms. But there’s still a tension, in that you can get a long way with just this computational, super-abstracted approach.

So then what else is left? And there’s that space of, well, biological life processes are important for cognition. This also depends on how we define cognition, because I think a modern computationalist approach would just define cognition as the functions, the output, or something, just choosing the right thing. And so there’s a certain amount of redefining that may be going on here as well, but this is one reason... Well, let me step back. All right, so we’ve talked about artificial networks, and what you don’t write about in the book, I think, is the idea of, like, the Blue Brain Project and these massive simulations, right, where you’re essentially trying to emulate as many biological details as you can in a system. And so I’m wondering what you think about that. I think what you’re going to say is something that you did write about in your book, and that’s that if you simulate or emulate or model all of the biological details, you no longer have a model. You have a neuron.

[01:33:43] Mazviita: Yeah, exactly. Yeah. I mean, every model has to be an abstraction. A model without any abstraction is not a model anymore; it’s the thing that you originally started out with. So I’m not saying that super-biorealistic models are inherently better. I think the notion of the inherently best model is a flawed notion, because different models are good for different purposes. So very simplified models can be great, depending on your purposes. What I’m backing away from, and encouraging people not to subscribe to, is this notion that a simplified model can give you all of the essential characteristics of the target. Because if you think that, you’re forgetting that it’s a simplification.

Yeah. So for many projects, and you’ve talked about prediction and control, simplified models can be superior, because they might just home in on a couple of variables that happen to be relevant to the predictions that you want to make.

So I think there are always balances and compromises and trade-offs between the level of detail that’s workable and relevant and useful to your project, and all of that. I mean, if you take the Blue Brain Project and the Human Brain Project, from what I understand, well, I met a researcher in science and technology studies, Tara Mahfoud, who actually did some field research with the Human Brain Project. And she talked about how, for the experimentalists that were collaborating on this project, these models were much too simplified.

They weren’t biorealistic enough, because they were having to make compromises in the models, like assuming identity across different organisms and across different labs, such that, if you knew the details, you’d see that this is actually very much an idealized representation.

And so then you have to ask, for the purposes, for the scientific questions that they were posing themselves, was this the right compromise? Other people in the computational neuroscience community said, no, this is needlessly detailed for the questions they want to pose.

[01:35:55] Paul: But if we did, and this is just a thought experiment, if we did recreate, let’s say, a simulation or a robotic full brain in some animat that’s moving in the environment, right, and it could pass the embodied Turing test.

My question is, where is the line? Because I agree that there’s something special about the biological processes, but where is the line if we can recreate it down to some limit, right? Some asymptote where we’re really, really close to the real thing?

Is the abstraction just that last 0.1 percent needed to get to the real thing? Is that where the special difference is, or do we just say that thing is close enough? Do we call it life at that point, and then we’re satisfied? And of course, then we wouldn’t be able to understand it, according to your analysis. But is there something special about life itself?

Or would we call that life?

[01:37:02] Mazviita: Where does it sit in your thought experiment? Is this like a biomimetic system that does everything that an organism does, like feeding itself?

[01:37:14] Paul: Sure.

[01:37:14] Mazviita: Has motivations, goals.

[01:37:16] Paul: It’s embedded. I’m just trying to get as close as possible to make you uncomfortable, okay?

To not let you back out of a corner.

[01:37:23] Mazviita: Is it constantly fighting against thermodynamic entropy, all of that? In this sort of life-mind continuity field, it’s these basic biological needs, having to work against entropy, having to take energy from your environment, having to maintain that environment-body boundary, that are taken to be very basic to what it is to also be a cognizing thing. So I think, in principle, if an artifact is facing those same challenges and having to do the same things, I don’t see why we wouldn’t think it is maximally close to evolved biology.

But if it is something like a current robot that isn’t having to do all of those self-maintenance things, but is, in a superficial way, copying some cognitive characteristics, that’s why I think we should hesitate to say that it’s really cognizing, as opposed to giving a clever mimicry of cognizing.

[01:38:32] Paul: Okay, I won’t push you too hard on that, especially because of time constraints. But I do want to just mention briefly, and this segues from your focus on and appreciation of biological processes, and being far from thermodynamic equilibrium, as a fundamental aspect of cognitive organisms. You talk about the mind and consciousness for, I don’t know if it’s a whole chapter, but you go on to conclude that, fundamentally, something like consciousness will forever be in the realm of philosophers and not scientists. Am I stating that correctly?

[01:39:17] Mazviita: Yeah. So I’d say it’s outside the parameters of the computationalist tradition.

[01:39:25] Paul: Okay.

[01:39:25] Mazviita: Because what computationalists do is draw on this analogy between computing systems, which I think no one has any reason to think are conscious, and biological systems, which can be conscious, and explain things in terms of what’s similar between the two. But if the lens through which you’re trying to theorize biological cognition is always those similarities with a non-conscious machine, that’s why I think consciousness will resist that tradition, or fall outside what’s explicable in that tradition.

[01:39:59] Paul: Yeah. These days, so much of neuroscience is computational neuroscience that it’s almost by default, when I say science, I mean computational neuroscience. Right. But the reason why I brought up consciousness here is because you go from this perspective, that these biological properties are necessary, and you make an argument and conclude that only biology can be conscious. And I’ll leave it to listeners to read that in the book, because it’s sort of a grand conclusion, in some sense.

[01:40:35] Mazviita: Yeah.

Well, with the caveat that if you have a very biomimetic artifact, it could be conscious. But no, the point is that, to understand cognition, I think consciousness is probably more central to cognition than people have tended to assume on the basis of that computationalist framework, which thinks of consciousness as an epiphenomenon, with the real work being all of the computational processes. But no, once you appreciate the centrality of basic biological life maintenance to what we mean by being a cognizing thing, then it just looks less and less plausible that a fully nonliving thing could have all of that rich cognitive life, which is what it is to have consciousness.

[01:41:43] Paul: Okay, well, I will conclude by thanking you again for the book. I love the book, and congratulations on it.

The book will be out when this airs.

Am I okay to say that it’s, like, free digitally, and if people want a physical copy, then they can also get that?

[01:42:02] Mazviita: Yeah.

Thank you so much for reading it, and for the kind words that you said about it.

[01:42:09] Paul: I’m going to be revisiting it. And, of course, because of people like you, I’ve been influenced to be interested in things like basal cognition. So I’m looking forward to that work coming out as well. So thanks, Mazviita.

[01:42:21] Mazviita: Thanks very much. Really nice talking to you. Bye.