Brain Inspired
BI 158 Paul Rosenbloom: Cognitive Architectures

Check out my free video series about what’s missing in AI and Neuroscience

Support the show to get full episodes and join the Discord community.

Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called Soar. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, in Paul's case the human mind, and Soar was aimed at generating general intelligence. He doesn't work on Soar anymore, although Soar is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain.

He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis for comparison and can communicate effectively with your peers.

All of what I just said, and much of what we discuss, can be found in Paul’s memoir, In Search of Insight: My Life as an Architectural Explorer.

0:00 – Intro
3:26 – A career of exploration
7:00 – Alan Newell
14:47 – Relational model and dichotomic maps
24:22 – Cognitive architectures
28:31 – SOAR cognitive architecture
41:14 – Sigma cognitive architecture
43:58 – SOAR vs. Sigma
53:06 – Cognitive architecture community
55:31 – Common model of cognition
1:11:13 – What’s missing from the common model
1:17:48 – Brains vs. cognitive architectures
1:21:22 – Mapping the common model onto the brain
1:24:50 – Deep learning
1:30:23 – AGI

Transcript

Rosenbloom    00:00:03    It has unfortunately always been somewhat of a fringe activity, both in psychology, or cognitive science, and AI, although it captures kind of the original motivation for AI as it was started back in the 1950s. I want to look for the deep underlying generalities and see how to get this wide range of phenomena out of the combinations of a small number of ideas. I mean, if you look over the history of AI, or probably of science in general, there are always booms when something interesting happens, and they always asymptote out, but you can never tell when, or why, or how high.

Speaker 2    00:00:51    This is Brain Inspired.

Middlebrooks    00:00:53    In the early 1980s, Paul Rosenbloom, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called Soar. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds, and Soar was aimed at generating general intelligence. It's been 40 years since then, and Paul is now Professor Emeritus of Computer Science at the University of Southern California. He doesn't work on Soar anymore, although Soar is still alive and well in the hands of his old partner, John Laird. He did go on to develop another cognitive architecture called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain.

Middlebrooks    00:01:57    He also helped develop the common model of cognition, which is not a cognitive architecture, but instead a theoretical model meant to bring together what everyone in the field agrees are the minimal components for a human-like mind. The idea is roughly to create a consensus within the cognitive architecture community around a shared language and framework, so that whatever cognitive architecture you work on, you have a basis to compare it to and can communicate effectively among your peers. All of what I just said, and much of what we discuss in this episode, can be found in Paul's memoir, From Designing Minds to Mapping Disciplines: My Life as an Architectural Explorer, which can be found online. I link to it in the show notes at braininspired.co/podcast/158. And I will say the memoir is an excellent read, both to learn a lot of science and history and to learn about a very self-aware and self-reflective scientist, and that would be Paul Rosenbloom. Remember, kids: only you can prevent forest fires, and only you can keep this podcast alive by supporting it on Patreon, where you'll get full episodes and can join our Brain Inspired community. Go to braininspired.co to learn more. And thank you to my Patreon supporters. All right, here's Paul.

Middlebrooks    00:03:26    I thought we might start with a quote from your... is it a living memoir? Is that how you refer to it?

Rosenbloom    00:03:33    Yes, it is living in the sense that I'm constantly updating it as I think of more things. It's actually been submitted to a journal, in which case, if it gets published, then it'll get frozen at that point. But so far I keep updating it.

Middlebrooks    00:03:48    About the memoir, let's see, because on the one hand I see the title From Designing Minds to Mapping Disciplines: My Life as an Architectural Explorer. But then it seems like an alternative title might be... oh, I forget. Oh, yeah, In Search of Insight. Which one are we going with?

Rosenbloom    00:04:09    <laugh> I actually started with that. <laugh> (Oh, you did?) But I decided that was a bit too generic, and that I'd be a little more explicit about the nature of what I've been doing, in terms of the kinds of topics I've worked on and my research methodology.

Middlebrooks    00:04:26    Well, I really enjoyed the memoir. It reads very easily, and, I guess as a memoir does, it explores both your scientific career and also some very personal reflections, parts of your personality that have shaped your career and your thought process. So I highly recommend it to people. Let me start with this quote: "One of the things I've realized about myself over the span of my career is that I am attracted much more to thought-provoking novelty than to rigorous methodology. Perhaps this makes me inherently pre-scientific or non-scientific, but I get little thrill from either formalization or precision." So that's the quote. And you characterize yourself, as in the title, as an explorer, and that's essentially what you have been doing throughout your career. You also state in the book how throughout your career you've been slightly outside the mainstream areas: slightly outside of artificial intelligence, slightly outside of cognitive science, and you've always been more or less part of a smaller research community. So I wanted to ask you about that first. Do you think all these things are related, the exploration aspect and your inherent pre-scientific or non-scientific <laugh> approach?

Rosenbloom    00:05:54    Well, that approach fits fairly well with the area in which I have kind of been driven to work, which is to look at architectures of mind, what we call cognitive architectures. It has unfortunately always been somewhat of a fringe activity, both in psychology, or cognitive science, and AI, although it captures kind of the original motivation for AI as it was started back in the 1950s. But most people quickly backed off to studying parts of minds, where they could study them carefully, and where they could measure things and make progress in knowable ways. But I've always cared about the big problem. And to me, that's been more of an exploration activity than a careful experimental activity, though we certainly do experiments at times, when you're looking at various parts of it and at how the parts fit together and so on. So I think both the way I think and the problem I chose to work on fit together quite naturally.

Middlebrooks    00:07:01    One of the people that you were influenced by, and worked with and under, is Allen Newell. He wrote the book Unified Theories of Cognition. He was there at the beginning, like you just mentioned, of artificial intelligence, but then he went on to get interested in these kinds of cognitive architecture problems. And I wanted to ask you about him. You write about him in the book, so people can always read the memoir, or I guess it's a journal article. I thought it would be a book.

Rosenbloom    00:07:31    Right now? Well, it's 80 pages, a little short for a book, but yeah, we'll see how it ends up <laugh>. Anyway, my philosophy was to write what you want to write and then figure out how to publish it

Middlebrooks    00:07:42    Afterwards <laugh>. That's right, that's right. And actually, at the end of the book, is that one of your 28 maxims? (It is.) It is, yeah. Okay, so that was really cool too. That's a lot of maxims, by the way <laugh>, and Allen Newell had three maxims, right?

Rosenbloom    00:07:59    <laugh> And there might have been more, but those were the ones I captured.

Middlebrooks    00:08:01    <laugh> Oh, right. Okay. Well, his weren't formalized; you kind of captured those in your own words, right? Yeah. So what kind of influence did Allen Newell have on you, professionally but also personally?

Rosenbloom    00:08:15    So Allen was kind of an amazing guy. He was certainly one of the founders of both AI and cognitive science, but he was also one of the founding figures in computer architecture and in human-computer interaction; he had major books that helped found both of those fields. I first got to know him a bit when I was visiting various universities to try to figure out where to go for graduate school. When I visited Carnegie Mellon, I had a chance to meet with him, and I was just impressed with his whole vision and enthusiasm for the path. That's one of the main reasons I ended up going to CMU and trying to work with him. But in AI, he had this grand vision of how to go about building minds.

Rosenbloom    00:09:08    And it started with a number of principles that he was heavily involved in defining. One of them was the notion of physical symbol systems, where a symbol is a pattern that can be combined into expressions, so you get this combinatorial nature; they can designate or represent things, and they can be interpreted, so you can use them as the basis for programs and things like that. So symbol systems are in fact one way of viewing general-purpose computing; computers are in fact symbol systems of a particular sort. That's one of the key ideas. Then there is the notion of production or rule-based systems, which are these local, reactive bits of memory, which you can add incrementally and which can execute when the situation is appropriate. So that was kind of a model for memory. Then there's the notion of problem spaces, or search, as a way of structuring both decision making and how one thinks about the consequences of decisions, so it projects into the future and looks at alternatives. And then there was ultimately this notion of unified theories of cognition, or cognitive architecture, as a way of putting these together. The earliest major architecture I worked on was Soar, though I'd worked on one on my own a little bit earlier than that, which I mentioned in the memoir.

Middlebrooks    00:10:33    XAPS?

Rosenbloom    00:10:35    XAPS, yeah, X-A-P-S, which I got started on when I was a visiting grad student in psychology at UC San Diego. That's a whole other story. But in the early years of Soar, we essentially found ways of putting these ideas of symbol systems and problem spaces and rule systems, and a kind of learning-by-practice mechanism that I had developed in my thesis, into an architecture that could combine them in all sorts of interesting ways and produce a range of phenomena people hadn't been able to produce before within a single system. So that was the kind of thing that excites me about going after these kinds of architectures: how can you find a small set of very general mechanisms that, when you put them together, can produce the range of human-like intelligent behavior?

Middlebrooks    00:11:27    And before we move on, because we're going to talk more about Soar and then later about your cognitive architecture Sigma: how did Allen affect you personally? Was he kind of a playful personality? It seems like it.

Rosenbloom    00:11:44    I think you'd say he was not an overly serious person, though he was totally committed to his work. He worked incredible hours; he worked like 60, 70, 80 hours a week on the research, and then he'd spend another 30 to 40 hours working on the communities around him, both the research community and the community at Carnegie Mellon. So he was totally committed to the stuff he was doing, but he was always cheerful. He was a happy person. To me, he was kind of an ideal of a researcher, in terms of how to work with people, how to work with your communities, how to do your research, how to set large problems and be committed to them over the long term, how to support everyone around him. So to me he was an inspiration and ideal, a kind of second father figure. He clearly got me started on the research path that influenced most of my career. He was a huge influence on my life: at first an advisor and mentor, and then a collaborator and friend over a number of years afterwards, when we continued working on the Soar project after both John Laird and I had graduated.

Middlebrooks    00:12:58    Was he an explorer like yourself, or would you consider him more of a... well, I'll let you describe it <laugh>.

Rosenbloom    00:13:07    I think so, though I don't know if he would describe himself that way. He was clearly an innovator; he invented a whole bunch of things that have influenced many scientific fields. He certainly believed in the scientific method, and we certainly did experiments <laugh>, but...

Middlebrooks    00:13:26    Much to your chagrin.

Rosenbloom    00:13:28    <laugh> No, not really <laugh>. I mean, I was happy to do them; I just realized they didn't give me the joy that certain other things did. But much of the early phases of AI and cognitive science were exploratory. Even if we didn't say that explicitly, the fields were wide open, and there were so many important problems to tackle that there often wasn't time to tie down every loose end in something before you went on to something else. So you were often trying to think big, and thinking about problems no one had thought about before. It was kind of inherently exploratory, and I picked that up as part of the culture. By the time I got to the field in the mid seventies, AI had been around for 20 years or so, but cognitive science was still not quite invented. But there still was this notion that there's this huge world of intelligence out there to discover: what can we find out about it, and what can we formalize, at least to the extent of being able to create programs that behave that way? So the kind of formalism we tended to think about was a procedural notion, not creating theorems and proofs, but creating programs that would actually embody the ideas we were considering.

Middlebrooks    00:14:47    Hmm. So you had, I think it was about a 10-year hiatus, where you had been working on Soar, one of the cognitive architectures we'll discuss here in a second, and then later you came back to begin work on what became known as Sigma, which is your newest and ongoing cognitive architecture. But during your 10-year hiatus you went even grander-picture, almost meta-science, and looked at theories of computing and how computing relates to other disciplines. And then you came up with something called diatomic... mo... diatomic maps, is that right?

Rosenbloom    00:15:27    I think it's dichotomic, but I'm...

Middlebrooks    00:15:30    Sure, di- dichotomic. Oh man.

Rosenbloom    00:15:31    Excuse.  

Middlebrooks    00:15:32    Let me say that over. Yeah, dichotomic maps. All right, so maybe we'll come back to those. But the interesting thing to me is that cognitive architecture, in your little... it's not a Venn diagram, it's kind of a nested diagram... is the most minute part of the way you view what you have worked on through your career. And yet this is a kind of unified picture of cognition. So you start from a pretty large scale there,

Rosenbloom    00:16:03    <laugh> Yeah. So when I started that ten-year digression, I certainly didn't have any intent of trying to re-understand computer science, or computing more broadly. But I had gotten burned out. I didn't see how to make further progress on Soar in the way I wanted to, and we had been working for a number of years on a military simulation, basically applying Soar to model pilots and commanders in military simulation, which was highly successful, but which took me away from the kind of research that I liked to do. One of the things I found out about myself is that I'm really not applications oriented, even though the payoff for that is very high. I've never started a company; I've always avoided doing anything beyond a toy application unless I kind of had to. But we as a group decided that was the right thing to do with Soar, and I think it was the right thing to do with Soar.

Rosenbloom    00:17:01    But after that, Herb Schorr, who was the executive director of ISI (this is the University of Southern California's Information Sciences Institute), saw I was burned out and invited me to work with him on new directions for the institute. That essentially involved going across the breadth of computer science and related fields and looking at what the future was to bring and what kinds of new areas we ought to get the institute into. So that had me working in many different areas, everything from technology and the arts to sensor networks to automated building construction. There was just this wide range of things I was working on, with many different partners, and to me it felt incoherent, in the sense that I couldn't figure out what brought all of this together.

Rosenbloom    00:17:57    Why was this a coherent subject of study? And that got me thinking a little bit during those 10 years about what they all had in common and how they fit together. It was actually near the end of those 10 years, and going on beyond them, that I started to come up with this notion of... at that point it wasn't a dichotomic analysis, it was more what I call the relational model, where I started to look at the relationships between computing and the physical sciences and the life sciences and the social sciences, and to itself. I started to understand that there was a core to computing, which had to do with this transformation of information, but that when you related that core to itself in various ways, and to these other scientific and engineering disciplines, you could start to understand how the different kinds of activities I was involved in, and ultimately how all of computing, fit together in kind of a neat way.

Rosenbloom    00:18:53    And that's sort of what led to the book On Computing: The Fourth Great Scientific Domain. That happened later on. As part of that, there was something inspired by Peter Denning, a computer scientist up at the Naval Postgraduate School in Monterey. I came to the notion that, okay, I was looking at the relationships between computing and these other domains, and I was thinking of them as great scientific domains. They didn't really have a name in standard use; everyone knew the physical, life, and social sciences, but there wasn't a term people used for those kinds of things. But I was using computing in the relational model just as I was using those, and I said, hmm, perhaps computing is one of those. And so that set me up thinking: okay, what is a great scientific domain, and is computing one of those?

Rosenbloom    00:19:46    And that then took me into philosophy of science, an area in which I've continued to work, though I can't ever quite claim to be a professional in it. But the question of what is a scientific domain, what is and what isn't one, and why consider computing one? So I came up with a definition, which of course fits computing, because it had to <laugh>, and fits these other domains as well. And then I started to look at the relationships among these, and that's what led to that particular book, and it has continued with work on these dichotomic approaches to understanding AI and cognitive science, and complex systems within those fields. So in a sense, you mentioned this nesting of the kinds of models I've worked on: the cognitive architectures are the biggest piece of my life; they're part of AI and cognitive science; those are bigger than that; computing is bigger than that; and all of science and engineering is bigger than that. So it's the smallest part of the figure, though it has been the biggest part of my career.

Middlebrooks    00:21:01    So, and I know that we're all over the place, but that's okay, did your stint in philosophy of science, and thinking about these bigger-picture things, have something to do with you reformulating and getting interested again in cognitive architectures?

Rosenbloom    00:21:18    Not directly. Those 10 years actually led to two efforts that I thought of as being completely distinct at the time. One of them was Sigma, this new cognitive architecture, and the other was the attempt to understand computing via the relational model. Both came out of that period. Sigma came out of, again, looking more broadly at AI and looking at all the advances being made at that time in this area called probabilistic graphical models. I thought we were missing something in the world of cognitive architectures by not understanding this world of probabilistic reasoning, reasoning under uncertainty, and the kinds of learning and reasoning you can do with it. So Sigma came out of that period. Although one of the things I understood while doing the book was that I really cared about this deep, fundamental understanding of things.

Rosenbloom    00:22:14    And part of what went into Sigma was understanding that I didn't feel like I was making progress in cognitive architectures unless I was bringing that same insight to them. The early years of Soar had that: there was kind of one form of memory, one form of learning, one way of making decisions, a notion of problem spaces for search, and one way of doing reflection. And through that small number of fairly general ideas, you got this huge variety of intelligent behaviors coming out. When you couldn't push Soar any farther, or at least I couldn't figure out how to push Soar any farther in that same paradigm, that's kind of when I burned out. Now, John Laird, who along with Allen Newell and me was working on Soar (it was John's original thesis, and then I joined with him, and Allen was the advisor of both of us)...

Rosenbloom    00:23:07    John continued working on Soar after I stopped, and he added more memories and learning mechanisms and other mechanisms, which was something I hadn't been willing to consider at the time, partly because, even without realizing it, I had this mental framework saying you can't do that <laugh>. John didn't have that framework, and so he has very successfully improved Soar in many ways since that point. Well, when I started to think, and it took quite a while to figure out how to do this, about how to bring the probabilistic graphical models and a Soar-like architecture together, I realized this was sort of a second period like the early days of Soar: all of a sudden I could see how to bring, again, a small set of ideas together that would be even more prolific, in terms of the range of things it would provide, than the original Soar was. So both the book and Sigma had this common notion: I want to look for the deep underlying generalities and see how to get this wide range of phenomena out of the combinations of a small number of ideas. That ended up being common to both, and I wouldn't have realized it if I hadn't worked on the book. That really helped.

Middlebrooks    00:24:21    Oh, let's back up and just state what a cognitive architecture is. I don't think I had ever seen a definition of what a cognitive architecture is; I just kind of loosely held it in my mind that I know what it is. But there's probably a pretty clear definition.

Rosenbloom    00:24:40    Well, I don't know if there's a generally accepted definition, actually. But the one I keep in mind: I consider it a hypothesis concerning the fixed structures and processes that define a mind. And for me it's a mind in general, whether it's a natural mind or an artificial mind. To most people, a cognitive architecture is focused on natural, in fact human, minds; most of the field will tell you that. And then there are AGI and AI architectures, which are focused on artificial systems. To me, it's about mind in general. In fact, that's something that's characterized my thinking all along: I haven't been focused on either human or artificial, I've cared about mind in general, and human and AI are two kinds. So I look at all the different kinds I can. I haven't looked at animal minds much.

Middlebrooks    00:25:27    Okay, that's what I was going to ask. But it is human level?

Rosenbloom    00:25:31    Human level, or I guess the term we use now is human-like. "Level" emphasizes a different aspect of it. So yeah, that's what a cognitive architecture is: what are the basic mechanisms and processes, and how do you fit them together so that you get the range of intelligent behavior coming out of that combination?

Middlebrooks    00:25:51    It's that phrase "fixed structure". For some reason, that helped me solidify what a cognitive architecture is and be more at ease with it. So is that the part that you think maybe is not commonly accepted as the definition?

Rosenbloom    00:26:07    So for me, that's the key part of it: fixed structure in minds. To most people in the field, fixed is important, though people will say, well, there's development as well as learning, right? And development might slowly change the mind, as opposed to evolution, which creates it fixed within an individual. There's another notion: Allen Newell, one of the things he did in Unified Theories of Cognition was come up with this notion that scale counts in cognition. He looked at different time scales, from microseconds to milliseconds to seconds to hours, days, weeks, months, and so on, and developed a set of bands of timescales. So he had what we called the biological band, which ended, I think, at around a hundred milliseconds, and then a cognitive band that went from around a hundred milliseconds up to a few seconds.

Rosenbloom    00:27:07    And then there was a rational and a social band. I may not be remembering that exactly, but it's close. For him, I think, the cognitive architecture was at the base of the cognitive band; it's the kind of thing that happens at around 50 to a hundred milliseconds. There's a notion, also probably due to him, though others may have come up with it early on as well, that there's a basic cognitive cycle time. For him it was originally around 70 milliseconds, in something he called the model human processor, which actually came out of his work in human-computer interaction. For us, it's turned into more of a 50-millisecond cycle time for cognitive architectures. So there's a way to think of cognitive architectures as providing the mechanisms that happen at roughly the tens-of-milliseconds level. Below that, you get into neural networks and other aspects of brain modeling; above that, you might get into rational things like logic and other kinds of things, which take more time in human cognition. But the cognitive band is sort of the tens of milliseconds up to a few seconds, so you can think of a cognitive architecture as the stuff that supports that particular band. It's just a different way of thinking about it. We tend to think of those as the same, because when we talk about the mind, we're talking about the cognitive band, as opposed to the brain, which is more the biological band.

Middlebrooks    00:28:31    But so Soar started off aimed at that cognitive band, at a 70-millisecond kind of time scale. I just want you to talk about Soar a little bit. Back at Dartmouth, right, at the birth of AI, they were going to solve AI within a summer. It's almost like right now is the right time for cognitive architectures to be coming back, because AI has made so much progress, and we've made a lot of progress in cognitive science and neuroscience. But looking back, it almost looks overly ambitious to try to bring this unified architecture to explain and implement cognition. What was the feeling like back then, when Soar was being developed?

Rosenbloom    00:29:25    So most of the field, most of the time, considers it overly ambitious. But those of us who are committed to it are always looking at what's the best we can do at this point in time, and the field goes up and down. Most of the time the field is looking at particular mechanisms, particular planning or learning or memory or reasoning mechanisms, and doing a lot of very careful development and understanding of all these specialized techniques. And then there are times when it gets dissatisfied with that and says, oh, it's time we've got to think about putting those together. So there was a time in the eighties, maybe the early nineties, when the field felt that was important, and that was a high point for the field. When we started in the mid-to-late seventies, there wasn't that much interest in it in the field.

Rosenbloom    00:30:17    There was, I guess you could say, early in the fifties. So where Soar kind of came from: there had been some ideas that Allen and Herb Simon and others had developed earlier. By the mid seventies, there was a project at CMU called the Instructable Production System project, and it was wildly ambitious. The notion was that you should be able to take production systems, rule systems, and build millions of rules into them, capturing much of human expertise. But you can't do that by hand; it has to be learned <laugh>. And so how do you do that? What you do is you build in natural language, and you talk to the system, and it learns from that. So the instructable production system was this production system based on what at that point was a highly efficient version of rule match, which was the most expensive part of these systems.

Rosenbloom    00:31:16    And there was this task environment, which was a model job shop, very simple for the day, but think of it as an early predecessor of game environments, back in the mid seventies. The idea was that the system, IPS, would try to solve problems in this domain, and you would give it advice and interact with it while it was doing it, and from that it would learn new rules. Okay, it was entirely too ambitious. In fact, as Allen Newell said in Desires and Diversions, it was a classic failure: there was a prospective paper and a retrospective paper and nothing in between. Soar turned out to be kind of the revenge of IPS. In John Laird's thesis, John and Allen understood how to bring together rule systems and problem spaces, so you could get decision making and rules going together.

Rosenbloom    00:32:16    And then in my thesis, I understood how to bring together rule systems and learning from experience. We were then able to put all of that together in a version of Soar that combined them, and all of a sudden, though it didn't have language yet (we started working on that a little later), it started to have the kinds of capabilities we had wanted in IPS but had had no clue how to achieve in IPS. So that was how that occurred. At that point in time, I think, to Allen the notion of unified systems was always important; to me it was always important; but the field went hot and cold on the idea. I do agree with you that I believe, or at least I hope, that we're getting into an era when we'll see interest in them.

Rosenbloom    00:32:59    Again, we're starting to see that out of the neural network community, the deep learning community, where they realize some of the limitations of just having a single neural network. For example, systems like AlphaZero, or AlphaGo before it, which beat the human world champion and extended to a bunch of other games, combine a neural network as a form of memory along with problem-space search of a particular sort. And there are other examples, like what they call neural Turing machines, where people are trying to combine neural networks with other kinds of capabilities. We've been looking at that for a long time in AI. Yann LeCun has come out with some recent editorials and papers about his ideas. He doesn't call it a cognitive architecture, but that's really what it is.

Middlebrooks    00:33:50    Why wouldn't he call it a cognitive architecture?

Rosenbloom    00:33:55    I don't want to speculate. I mean, people like Geoff Hinton know exactly what a cognitive architecture is; he was on my thesis committee, actually, and I've known him since my days as a visiting graduate student at UC San Diego. LeCun I've only met once, I think, very briefly at a workshop in Zurich, and I don't think he's read our papers. I think the neural network world... new fields often try to create space for themselves by separating themselves out from earlier ones, and they try not to understand what was going on there. AI did that when it got started: it separated itself out from cybernetics and EE and pattern recognition, and it was only decades later that we started to bring them back together. So at some point we'll bring them all back together.

Middlebrooks    00:34:48    And then your legacy will be sealed <laugh>. Right. I mean, how frustrating has that been?

Rosenbloom    00:34:55    They say that was all wrong. So yeah, it was history, but it was all wrong. I mean, for example, if it's not all differentiable, then it doesn't count. And the kinds of stuff we do... I mean, Sigma has some neural network aspects that are differentiable, but not all of it is, and so to them it doesn't count.

Middlebrooks    00:35:14    I mean, it's obvious that eventually the specialized deep learning systems are going to need to be put together in an integrated cognitive architecture. Am I missing something, or is that obvious?

Rosenbloom    00:35:29    That's obvious to me. And I think it's become more obvious to some of the players in neural networks, but most of them still work on individual applications. Maybe they will be inspired by the recent success of the generative models, which are showing just a huge ability to do all sorts of things. To me, they're kind of the latest, best approximation to the subconscious, which is one part of the full system. And they will understand that and say, oh, we've done such a good job on this, but if we add this, this, and this, we can get to the full thing. I mean, they're all talking about sentience and consciousness now and so on. So they'll start to realize, oh, if we add these other capabilities, maybe we'll get there.

Middlebrooks    00:36:16    Okay, all right. So let's go back to Soar, I suppose, before we go down the rabbit hole there, because I want to come back to the deep learning eventually as well. So Soar was a solution that was sort of a... oh, what did you call it? Revenge. Yeah, revenge on the previous classic failure, as Allen Newell put it. And then you worked on that for a long time with John Laird and...

Rosenbloom    00:36:46    Yeah, about 15 years, with Allen, until Allen died.

Middlebrooks    00:36:49    Yeah. You developed something called chunking, which you have left out of Sigma. Maybe you'll want to describe what chunking is. Anyway, you and John Laird both worked on that, and Allen, until he passed. What else is there to say? How far did you get with Soar before you thought you couldn't make any more progress on it? Was it just because of the limitation of the single memory function, et cetera?

Rosenbloom    00:37:22    So let me start with chunking, since that's what popped up. That came out of some work I did with Allen Newell after I returned from UC San Diego. He had signed up to write a paper, a chapter in a book, and he wanted to do it on the power law of practice, which was kind of known implicitly, but hadn't been made explicit, as far as I know, before that. It's a general phenomenon which says that if you graph the amount of time it takes you to perform a task versus the number of times you've done it, you get a power law curve.

Rosenbloom    00:38:02    Now, a power law in the simplest form says that the time equals the number of trials raised to some power, and generally it's a small negative power, so you get a curve that drops off like an exponential curve, but slower. So we started by doing a bunch of work trying to establish that you found power law curves everywhere: whenever you'd measure human performance, you got a power law curve. And then we started to come up with a theory of what could produce power law curves. It was Allen's intuition to start with the notion of chunking, which traditionally came out of psychology as a way of talking about short-term memory as consisting of chunks of items (seven plus or minus two was George Miller's classic paper on the topic). The way we eventually developed it, in my thesis, was as a notion of combining experiences, essentially.
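As a rough sketch in symbols, the power law of practice he describes is usually written along these lines (the symbol names here are illustrative, not from the episode):

```latex
% Power law of practice: time to perform a task after N practice trials.
% T_1 is the time on the first trial; \alpha > 0 is a small, task-specific
% exponent fit to the data. Larger \alpha means faster speed-up.
T(N) = T_1 \, N^{-\alpha}
```

On log-log axes this becomes a straight line, log T = log T_1 - alpha log N, which is how such curves are typically identified in practice data.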

Rosenbloom    00:39:01    So the idea was: if you're going along fine and you have enough knowledge to do what you want to do, you're just applying the rules you already have to help make decisions. If you run into what we call an impasse, you step back and reflect (this is one form of what you might think of as part of consciousness): you step back and think about the problems in your own behavior, and you solve those problems. And from that, the idea is, you can learn new rules, so that in the future they can just fire and you can continue on, without having to think hard about it again. That's what chunking became: essentially a means of learning new rules by summarizing the reflective processing you did when you ran into an impasse. And it turned out that applied all over the place in Soar.
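A minimal sketch of that impasse-driven loop, with invented names and a deliberately toy rule representation (this is an illustration of the idea, not Soar's actual code):

```python
# Toy sketch of impasse-driven chunking: apply existing rules when they
# cover the situation; on an impasse, reflect (slow problem solving),
# then cache the result as a new rule so next time is fast.

def act(state, rules, reflect):
    """Return an action for `state`, learning a new rule on impasses."""
    for condition, action in rules:
        if condition(state):
            return action          # existing knowledge covers this case
    # Impasse: no rule fired. Step back and solve the problem deliberately.
    action = reflect(state)
    # Chunking: summarize the reflective episode as a new rule, keyed to
    # the situation that produced the impasse.
    rules.append((lambda s, s0=state: s == s0, action))
    return action
```

Because each impasse adds a rule, repeated encounters with similar situations shift work from slow reflection to fast rule firing, which is one intuition for why practice speeds performance up.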

Rosenbloom    00:39:53    There are a number of different kinds of impasses it could run into, and so you could learn rules that controlled search, rules that implemented operations of various sorts, all different kinds of rules. That was, again, part of the early excitement. As time went on, we found more and more things Soar could do, and we were doing wider and wider ranges of applications, from knowledge-intensive systems and search systems to real-time robotic control and control of simulated entities. So all of that was going along fine. What I didn't feel was that the architecture itself was improving. I didn't know what was missing; that was part of the frustration. I knew we weren't there yet, but I didn't know what was missing. I had a sabbatical just before I stopped working on Soar, and I tried to work on emotion.

Rosenbloom    00:40:49    And I realized it was very difficult in Soar. The only thing I came up with was a bit of a model of frustration, which seemed extremely <laugh> appropriate. It was essentially that you'd start reflecting, and you'd keep reflecting down to arbitrary levels and never make any progress, and that was the notion of where you'd start getting frustrated. And that was kind of when I stopped working on Soar. It wasn't until Sigma, where I started to bring together the probabilistic ideas and the kinds of learning and reasoning you can get out of them, and how that gave you ideas about how to do perception and motor control and visual imagery and all these other kinds of things, that I felt like I could make progress again. Now, you mentioned that chunking's not in Sigma, and that's perhaps my biggest failure with Sigma to this point. I've tried any number of times over the years to put chunking into Sigma. Sometimes I've looked at it very narrowly, just as a way of combining rule-based processing; sometimes very broadly, as a way of summarizing all the kinds of things going on when Sigma reflects. But I've never managed to pull it off. It's interesting.

Middlebrooks    00:42:13    Is the challenge because you use probabilistic graphical networks as the sort of ground-level implementation beneath Sigma?

Rosenbloom    00:42:24    I haven't been able to articulate exactly what the issue is, but I know that the core problem is finding a way to summarize the processing that goes on when you reflect in Sigma in terms of knowledge structures within Sigma, I mean, generalizations of the rules in Soar, which include things like probabilistic graphical models and things that are hybrids between them and all these other things. There's a particular way the summarization process happened in Soar, and I've been trying to mimic that in some ways in Sigma, and it's just never quite worked out, for all sorts of very subtle reasons. Unfortunately, I'm probably finished with Sigma, since I've retired. I just don't have what it takes to go back and pursue that further.

Middlebrooks    00:43:12    So Sigma itself is not finished.

Rosenbloom    00:43:16    Sigma may be finished. There are some students who may continue doing things with it, but as far as what I can do, it's kind of finished, unfortunately. There may be an inspiration that hits me, or hits someone else, and I or they just can't resist going back and trying to make it happen. But yeah, that's a key kind of learning that I've never managed to get into Sigma.

Middlebrooks    00:43:41    Hmm. Maybe you need to go to DARPA for another 10 years or something, and then <laugh> that'll do it. But...

Rosenbloom    00:43:46    Sigma did different kinds of learning, which did all sorts of additional things we couldn't do in Soar, although John has since added multiple new mechanisms to Soar to deal with some of those kinds of learning.

Middlebrooks    00:43:59    So we didn't really talk about, and I don't know that we need to get in the weeds about it, but we didn't really illustrate the actual architecture of Soar, and then how Sigma differs from it and how similar it is.

Rosenbloom    00:44:14    Okay. So the original Soar, the way I think about it still (I mean, the way it was when I left the project in the late nineties), is fairly simple conceptually. There's a single long-term memory consisting of rules, and there's a working memory, which is short-term memory. The way things work is that each rule is a set of conditions and a set of actions. The conditions match patterns in working memory, which are these little simple structures, trees of symbols, and the actions generate new structures in working memory. So there's a basic memory-retrieval cycle of matching and executing rules in parallel, which augments what's in working memory. Perception comes into working memory, and then you can view this memory as elaborating what you know about what you're seeing. So that's a core part of the memory, of what we call the knowledge cycle.

Rosenbloom    00:45:16    And that's happening in small numbers of milliseconds in human time. So that's kind of the innermost loop. Outside of that is the cognitive cycle, which, as we were saying, in humans is running at approximately 50 milliseconds. What happens there is you run the rule memory to exhaustion, until there are no more rules that can fire; they fire in waves of parallel rules until you've kind of exhausted what you know about the current situation. Part of what you retrieve are what were called preferences. In the original Soar, they were little symbol structures that say "prefer this operation over that one" or "this operation is best in this situation"; in later versions of Soar, there were numeric preferences as well. You could then make a decision. If you could make a decision based on these preferences, you would choose an action to execute, or you'd execute the action, or do things like that. If you couldn't choose an action or apply it, that's when you'd reach an impasse, and you'd step back and reflect, and you'd give yourself a whole new space in which to think about the problem of why you couldn't make a decision.

Rosenbloom    00:46:25    And out of sequences of those kinds of things, you'd get all kinds of search and reasoning. Then once you'd figured it out, you'd return a result, which met the impasse: you could make a decision, the impasse would go away, and new rules would be learned. That was kind of the essence of Soar, the way I thought about it. John has since added two additional long-term memories; he's added a visual imagery memory; he's added multiple learning mechanisms, for reinforcement learning and for declarative learning and episodic learning. So it's a much more complex beast than it was back then. But that was the essence of Soar as I thought about it.
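A loose sketch of the decision cycle as just described, in invented toy form (real Soar's working memory, rule matcher, and preference semantics are far richer than this):

```python
# Toy sketch of Soar's cognitive cycle: elaborate working memory with
# parallel waves of rule firings until quiescence, collect preferences,
# then decide on an operator or declare an impasse.

def cognitive_cycle(working_memory, rules, choose):
    changed = True
    while changed:                        # fire waves until quiescence
        changed = False
        for condition, action in rules:
            if condition(working_memory):
                for fact in action(working_memory):
                    if fact not in working_memory:
                        working_memory.add(fact)
                        changed = True
    # Facts here are tuples; those tagged "prefer" are operator
    # preferences, e.g. ("prefer", operator).
    preferences = [f for f in working_memory if f[0] == "prefer"]
    decision = choose(preferences)        # None means no decision possible
    return decision if decision is not None else "impasse"
```

In the real architecture, an "impasse" result would open a new problem space to reason about the stuck decision, and chunking would summarize that episode into new rules.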

Middlebrooks    00:47:06    And then Sigma has a lot of similarities to Soar. Eventually we're going to talk about what's being called the common model of cognition, which kind of brings all these principles together in an abstract way. But what did you see? How is Sigma different from and similar to Soar? What are the main differences?

Rosenbloom    00:47:29    So in some sense the inspiration is similar: I was trying to get by with one, or a small number, of everything. But the kinds of things it has are more general. The way I think about Sigma is that it's at least a two-layered architecture. There's a cognitive architecture, which in some ways is like Soar. Below that is a graphical architecture, which is based on a generalization of what are called factor graphs, a form of these probabilistic graphical models. They're all ultimately due to Judea Pearl and his work on Bayesian networks; factor graphs are a very, very general form of these. I got inspired by reading a paper out of EE, which was nominally about things like error-correction coding theory (I unfortunately forget at the moment the three authors on it), but it showed me the breadth of what you could do with factor graphs.

Rosenbloom    00:48:29    So I started to think: okay, can I start with a level like those, but use them not only for probabilistic reasoning but for rule-based reasoning? And ultimately we got to points where we were using them for neural-network-like reasoning and other kinds of things. So I built this generalized factor graph representation as a graphical model and then asked, okay, how can I build rules, how can I build decision making, how can I build search and learning, all on the basis of these factor graphs? And I got things that were not only like Soar's rule memory, but like the later Soar's semantic memory, a memory for facts where you can give a pattern and retrieve facts similar to it, and its episodic memory, where it records experiences that you can retrieve in various ways, as well as things that were much more probabilistic, and things that were hybrids among those, because you had a deeper understanding of how they all related to each other.
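For readers new to factor graphs: they express a joint function over variables as a product of local factors, and queries are answered by summing variables out. A tiny, self-invented example of that computation (nothing Sigma-specific here):

```python
# Toy factor graph over two binary variables a and b:
#   p(a, b) is proportional to f1(a) * f2(a, b).
# Summing out b yields the marginal over a, the basic kind of query
# a graphical layer like the one described can answer.

domain = [0, 1]
f1 = {0: 0.3, 1: 0.7}                      # unary factor over a
f2 = {(0, 0): 0.9, (0, 1): 0.1,            # pairwise factor over (a, b)
      (1, 0): 0.4, (1, 1): 0.6}

unnormalized = {a: sum(f1[a] * f2[(a, b)] for b in domain) for a in domain}
z = sum(unnormalized.values())             # normalization constant
marginal_a = {a: v / z for a, v in unnormalized.items()}
print(marginal_a)                          # p(a) after summing out b
```

The appeal Rosenbloom describes is that rule match, probabilistic inference, and signal-like processing can all be cast as variants of this same product-and-summarize computation over a graph.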

Rosenbloom    00:49:34    So Sigma was built on that intuition. It has a much more general, unified memory model, which can do all three kinds of memories, plus neural-network memories and other kinds of memories as well. It still has a decision cycle, but it's now based on all these numbers floating around in these graphs. And it has a Soar-like reflection capability, but, again, not a Soar-like chunking capability. We've found we can use it for perception; you can do both neural-network-like perception and probabilistic-graphical-model types of perception. I did certain generalizations that generalized over symbols and numbers, where I start with a real number line and then allow discretization of it, so that you can get integers, and then allow assigning symbols to discrete regions, so you can get symbol processing.

Rosenbloom    00:50:29    So I could get a range of things from symbolic to numeric processing, and that made it fairly easy to do things like visual imagery, because I had this underlying n-dimensional numeric space I could tap into. There's essentially a single number line, but you can take cross products of them, so you can get arbitrary numbers of dimensions. So there were all sorts of things I could now do that I couldn't do with Soar originally, and that's what got me excited again and drove me for the next 10-plus years.
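That generalization, a continuous line discretized into regions with symbols attached, can be sketched like this (boundaries and symbol names are made up for illustration):

```python
# Sketch of the numbers-to-symbols generalization described above:
# start with a continuous dimension, cut it into discrete regions,
# and attach a symbol to each region.
import bisect

boundaries = [0.0, 1.0, 2.0, 3.0]       # region edges on the number line
symbols = ["red", "green", "blue"]      # one symbol per region

def symbol_for(x):
    """Map a continuous value to the symbol of the region containing it."""
    i = bisect.bisect_right(boundaries, x) - 1
    if 0 <= i < len(symbols):
        return symbols[i]
    return None                          # outside the symbolic range

print(symbol_for(1.5))                   # -> "green"
```

Crossing several such lines gives a multidimensional numeric space, which is roughly how purely symbolic structures and image-like representations can coexist over one substrate.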

Middlebrooks    00:51:01    Well, all that detail you were just talking about reminds me of a couple of your maxims <laugh>. One is that implementation is important: to actually implement the architecture, the ideas that you're thinking of. But then, I think, the one just before that, which you mention might be somewhat in conflict with "implementation is important," is that insight is important, which is kind of the big picture, right? Sorry, this is a bit of a diversion, but how have you done that when you have to work on these problems of how you're going to implement it, which inevitably must shape how you think about the process itself as well?

Rosenbloom    00:51:44    So, fortunately, I seem to be able to work at both those levels at once.

Middlebrooks    00:51:50    God, that is fortunate.

Rosenbloom    00:51:50    There are many people who are stuck at one level or the other, and they, to me, are somewhat limited in what they can accomplish. I mean, they can be very good at all the details, or very good at high-level thinking, but I've always been able to maintain both in mind at the same time, which is what has let me accomplish what I've been able to do. And it's multiple levels. There was the grand idea of Sigma, but within that there are a bunch of smaller ideas which are visionary in some sense also. And then there's the hard slogging of how to make all this work. I have gotten a lot of my insights out of doing that. Some of the original ideas about how this would work simply didn't work out.

Rosenbloom    00:52:34    And if I had just stuck at a high level, I would not have realized that, and my insights would not have been as deep. But when you go down and actually figure out how to make it all work, and how to make it all work together, that's, to me, the key challenge in cognitive architectures: not the individual mechanisms, but how they all smoothly work together to yield, through their combination, what you're looking for. It's always possible to slap together a bunch of modules and get something going, but to get them going generally and elegantly is kind of the key challenge for me.

Middlebrooks    00:53:07    So, not to belittle any other cognitive architectures, because there's this review paper, and I don't know how many cognitive architectures there are; it's in the eighties now, right?

Rosenbloom    00:53:17    Yeah, around a hundred or so. I mean, plus or minus a hundred.

Middlebrooks    00:53:22    <laugh> Yeah. And I've had Chris Eliasmith on the podcast, and we've talked about his cognitive architecture Spaun. I've had Randy O'Reilly and his cognitive architecture Leabra, and those are more on the neural side of things generally. But I would imagine the big three that often get mentioned in the same breath are Soar, Sigma, and then ACT-R, which is John Anderson's psychological cognitive...

Rosenbloom    00:53:48    Architecture. Almost right.

Middlebrooks    00:53:50    <laugh>. Oh, is it? Tell me I'm wrong. How am I wrong? What's wrong about it?

Rosenbloom    00:53:53    It was only a big two. Soar, I mean, is kind of the leading one with the AI focus, and ACT-R with the cognitive science focus. Sigma's in the common model because I was one of the leading developers of the common model. Uh-huh <affirmative>. Otherwise, in some of the later papers, they don't always talk about Sigma. To me, Sigma is a synthesis of much of the rest of it. I see. But there's really a big two, not a big three.

Middlebrooks    00:54:17    Okay. Well,  

Rosenbloom    00:54:18    There’s a big three, but not for everyone.  

Middlebrooks    00:54:20    <laugh>. Okay. Well, we'll get to talking about the common model, but since quote-unquote cognitive architectures are outside the mainstream somehow, what is the sense of community between people working on their versions of cognitive architectures? Is it competition? Are people trying to synthesize things? Do they get along? Do they all hate each other? You know,

Rosenbloom    00:54:46    <laugh>,  

Middlebrooks    00:54:47    <laugh>, um,  

Rosenbloom    00:54:48    So like any research community, the dynamics are complex. I mean, it's full of people with egos and ambitions and things they're trying to accomplish. And it ranges widely. The big two architectures have had large communities working on them. There are some where it's just a single individual working on their own, sometimes an individual who isn't even in any of the core fields; they just feel they understand what intelligence is about. And there's everything in between. There are good relationships and sometimes hard feelings. There are jealousies. I mean, there's everything you can think of that you'd expect from a community. The common model is a bit different in that it's trying to bring people together to get them to agree, or at least find out where they do agree.

Rosenbloom    00:55:43    Mm-hmm. <affirmative>, so we'd call it a consensus model. It's trying to go out, at least initially, to folks who work on cognitive architectures and ask: what do we as a community agree is true of a human-like cognitive architecture, where human-like means either human or an AI architecture that adheres somewhat closely to how humans work? So there are some very widely different architectures in AI and AGI. Mm-hmm. <affirmative>, there's universal AI, which says there's a single equation that produces all of intelligence. But for human-like AI, we thought there was a chance for conjunction rather than just disjunction. And we've been surprised to find we can get fairly far with that. I mean, it's going kind of slowly at the moment; right now we're working on topics like emotion and metacognition, or reflection, trying to add those two to it. Mm-hmm. Because the original common model is fairly minimal in a variety of ways, but we were surprised we could get that much agreement across that community. So it works across the cognitive science and AI communities, people who care about cognitive science, and those interested in architecture. It's not something that people who look at individual mechanisms care about. The neuroscience community doesn't care about it. As for AGI, we've gotten some rapport with the AGI community on it, people like Ben Goertzel and so on. But

Middlebrooks    00:57:12    What about philosophy? Has philosophy embraced it, or had anything to say about it? Philosophy of mind people.

Rosenbloom    00:57:18    We haven't had a lot of contact with the philosophy of mind folks about things like the common model specifically. Mm-hmm. <affirmative>, I know there's a history, people like Daniel Dennett, for example, and so on. And there is an interesting philosophy of mind community, but so far the common model hasn't really made much contact with philosophy of mind.

Middlebrooks    00:57:40    Well, it is pretty minimal. And I'd love to hear the story in a moment of how it arose out of a meeting or a conference. But let me just list off the five parts of it, and then we can go from there. So at the center of it all, the hub, is a working memory component. And this is not a working cognitive architecture; this is an abstract theoretical proposal for how minds are structured and function. Yes. Yeah. So, okay. So there's this core working memory component. There are two types of long-term memories: one that holds your knowledge, your semantic kind of memory, and another that is more procedurally based, in which you learn skills. And then there's perception and there's action. And those four things come out on spokes from the working memory, where everything is bound or comes together, I suppose. And then perception and action are connected to each other as well. But that's a pretty minimalistic model. But even you were surprised that among the cognitive architecture community this model is fairly accepted as a theoretical, abstract entity.

Rosenbloom    00:58:54    Right. So there's a bit more, too. There are decisions, which don't make it into the figure. With attention, we discussed whether, for example, it's just part of the skill memory or whether it's a separate module that should be shown. There are a number of assumptions that go along with this that try to make it a bit more specific and concrete. But yeah, it's minimal, but still surprising that we could achieve consensus. Because if you look at that paper you mentioned, I think there were 80-something active architectures, but there were over a hundred that have been developed over the last four decades. Okay. And if you look at all those architectures as described, very few of them follow that model. Oh, really? But if you look more deeply at them, you can see many of them in terms of this model.
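The hub-and-spoke structure just described can be summarized in a few lines of code. This is a sketch of the diagram, not an implementation of the Common Model; the class names and module labels are taken from the discussion above, not from the papers.

```python
# A minimal sketch of the Common Model's hub-and-spoke structure as
# described in this conversation. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str

@dataclass
class Architecture:
    modules: dict = field(default_factory=dict)
    links: set = field(default_factory=set)

    def add(self, name):
        self.modules[name] = Module(name)

    def connect(self, a, b):
        self.links.add(frozenset((a, b)))

cm = Architecture()
for name in ["working_memory", "declarative_ltm", "procedural_ltm",
             "perception", "action"]:
    cm.add(name)

# The four spokes: everything binds through working memory.
for spoke in ["declarative_ltm", "procedural_ltm", "perception", "action"]:
    cm.connect("working_memory", spoke)

# Perception and action are also connected to each other directly.
cm.connect("perception", "action")

# Debated additions mentioned above (decisions, attention) could be folded
# into procedural_ltm or added as further modules; the model leaves it open.
print(sorted(tuple(sorted(link)) for link in cm.links))
```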

Rosenbloom    00:59:42    So for a number of other architectures, we have mappings of them onto the common model. I see. But the way the common model is expressed came about because the three of us who did the first version of it were, again, John and me and Christian Lebiere, who has worked for many years with John Anderson on ACT-R. So it was highly influenced by the way we thought about Soar and ACT-R and Sigma, with me, with Sigma, often pushing them in ways that were uncomfortable, because I saw these things in a fundamentally different way. Sigma, for example, doesn't have two long-term memories. Right. But you could build both of those on top of Sigma's single long-term memory, because it's general in a way; they're regions of memory rather than distinct memories themselves. So there were things like that we had to work out.

Rosenbloom    01:00:30    But it started, I mean, the first step was actually Christian and I, who I didn't know very well at that point, at a meeting that was discussing the next International Conference on Cognitive Modeling. And we were both saying there are all these small specialized conferences, each of which captures a small bit of what goes on in cognitive architectures. So ICCM, this conference we were just talking about, is really focused on cognitive modeling, human-like things. There was AGI. There's BICA, which is biologically inspired cognitive architectures. There's Advances in Cognitive Systems, which is high-level symbolic cognitive architectures. But there wasn't any venue where you could interchange all these different perspectives on the topic.

Rosenbloom    01:01:18    So out of that, we created what was called an AAAI symposium on integrated cognition, which Christian and I led. I think I probably opened the symposium; he closed it and summarized his thoughts from it. And his summary was in terms of what he called a standard model: what he saw as common across the approaches at the symposium. We did have people from neural approaches and AGI approaches and cognitive approaches and AI approaches. I can't remember if we had anyone from philosophy or other fields. But from what Christian saw, there was a small number of things he thought were in common. And so he put it up on a slide as the standard model of the mind. And the thing that shocked us was everyone in the room at the time said, yeah, that seems right.

Rosenbloom    01:02:07    That totally surprised us and gave us the kernel of hope that we could build on that to do something more substantial. John, who was at the workshop and worked closely with me and knew Christian well, joined us early on. And that led to the paper on the standard model of the mind that appeared in AI Magazine, which later got renamed as the common model of cognition. We held two further symposia on this common model idea. The first one was still on the standard model of the mind. One of the kinds of pushback we got was fear, because of the notion of the standard model in linguistics, that we were going to come down and impose something on the field. So there were folks who were feeling quite threatened by the whole thing. Mm-hmm. <affirmative>, there are some who still feel threatened and feel like we've left them out. So one of the things we did as a community was hold a poll and rename it. We did it in individual pieces, so standard versus common, different alternatives.

Middlebrooks    01:03:08    Okay.  

Rosenbloom    01:03:09    Model of the mind, model of cognition, whatever. And we did polls, and we ended up with the common model of cognition. That's the name for what we were doing. Nice.

Middlebrooks    01:03:17    You know, my knee-jerk reaction in reading about the common model, and learning this story of how everyone more or less agreed, is to wonder whether that's due to a bias in the cognitive architecture community: of course everyone agrees, because we all have the same kinds of constructs of what, psychologically, we're supposed to be doing, because we all use the same words from psychology. But it doesn't necessarily mean that we have the ontological psychological categories correct. That's just our common language. So do you think any of it has to do with the cognitive architecture community's bias in thinking in those terms?

Rosenbloom    01:04:07    So that's certainly a possibility, particularly with respect to the terminology. I would hope, though, that the ideas transcend the terminology. Yeah. And that even if you say we're biased in the terminology, some of the core ideas hold. But we've been trying to challenge it by mapping additional architectures onto it. Mm-hmm. <affirmative>. So we started with the three and what we knew more broadly. But if a wider range of people map on and say, here's what it's missing, or here's how you're thinking about it wrong, that provides food for thought. Something we haven't talked about yet, that I know you wanted to talk about, is the mapping of the common model onto the brain. Mm-hmm. <affirmative>. It's another thing that is showing us that to some extent we're thinking right about it, whether or not the terminology is the best way of thinking about it.

Rosenbloom    01:04:56    So we're hoping to challenge it. Ultimately, we hope that anything that's human-like will map on. And again, you have to be careful that you don't say, okay, that didn't map onto it, so it's not human-like and therefore we don't have to worry about it. Right. So you've got to be careful about those kinds of things. But if we can get the wider community of folks working on human-like architectures to map their architectures onto it, so that we understand how all these things relate, that's again part of the hope: that the common model provides kind of a lingua franca for understanding the relationships among these models. Hmm. That will get beyond the initial starting point and concerns about whether it's biased because of where it started. Hmm. But we'll see how it goes. So with that whole effort, we're trying to serve as facilitators rather than leaders.

Rosenbloom    01:05:47    So we started eight working groups and got leaders for each of them who weren't the three of us. Unfortunately, they only got far enough to produce some papers for the second common model workshop; they each had their own things they were working on, so it kind of died out. So the three of us are taking it back on, working with Andrea Stocco as well, and seeing if we can push some things behind the scenes and then bring the community back in to see if we can get them re-energized.

Middlebrooks    01:06:19    Well, I read through a bunch of the papers that you guys, I guess, commissioned. So you put out this common model of cognition bulletin where basically you're asking other cognitive architecture folks, like you were just saying, to reflect on their own cognitive architectures and offer what they think might be missing from the common model of cognition. But a lot of them seemed just like advertisements for their own cognitive architectures. I'm wondering, one, what you got out of reading those papers, out of that input. And then maybe we can talk about some of the things that may or may not be missing, because a lot of those papers said, well, this is missing, and my cognitive architecture has it. You know.

Rosenbloom    01:07:04    Right. So a lot of it shouldn't be surprising, and I try to be accepting. The bulletin, actually, the common model bulletin, is an interesting case in point. Rob West, who I think is at the University of Waterloo in Canada, has been one of the more active figures in the common model community and the cognitive architecture community in general. He traditionally came out of the ACT-R world. He stepped up and volunteered to do that on his own. Oh. So we said, sounds great, please do it. And it's trying to be a repository for papers related to the common model, wherever they're published. Oh. As for the papers that came out of those working groups, they were on topics like emotion and metacognition and language and the neuroscience basis.

Rosenbloom    01:08:02    We hoped they would do a summary of what the issues were for the common model in those particular areas. And some of it ends up being advertisements for individual architectures; some of it was, as you were saying, here's what's missing from the common model, and my architecture has it. That's actually a good thing, because that was mapping the architectures onto the common model. Right. And it was saying what they thought the problems were, and that's some of the kind of feedback we needed to hear, even if it was from their perspective. But that's perfectly natural, and it's where they have expertise. So, I mean, it's a mindset to work on the common model, and for people working on their individual architectures, it's hard to shift perspective.

Middlebrooks    01:08:46    Course.  

Rosenbloom    01:08:47    And so we're trying to help the field get that perspective and be able to combine it. I mean, they're not going to shift away from working on their individual architectures, but what we're hoping, again, is that they can combine that perspective with the perspective of thinking about the common model, and essentially do those two kinds of things in parallel. That would be the ideal world.

Middlebrooks    01:09:10    But again, you know, <laugh>, I was kind of frustrated reading a lot of the papers, because it felt like a lot of people weren't willing to play that game. Right. They only wanted to stay in their own world and talk about their own thing. I can use one example, because I've had him on the podcast: Stephen Grossberg and his adaptive resonance theory. His entry read like any of his other papers. And he's been hugely influential, and adaptive resonance theory is really old and has done a lot of really cool stuff. But there was no connection to the common model of cognition that I could see. And that was the case for a lot of the different entries, or papers, I suppose. But what did you gain from reading those? Was there a common theme that got you thinking, oh, maybe this is one of the important things, like affect, motivation, et cetera, that we should add?

Rosenbloom    01:10:03    It would be nice if we could get more people. I mean, Andrea Stocco is a good example. He's someone else who grew up, to some extent, in the ACT-R community. He's got more of the neuroscience perspective. He's at the University of Washington. But he's totally bought into the common model perspective. And so we invited him to join us, to bring an additional perspective in and to bring a smart, younger person into the core group of people thinking about this. And it was completely natural for him to think in this fashion. Hmm. And for some people it is, and for other people it just isn't. And it may, excuse me, it may never be. So we try to get from people what we can; we try to enculturate them as much as possible, and hope that things will grow in some kind of natural fashion. I won't say it's not frustrating at times, but to me, it's just the natural order of things. And so you do your best given the nature of everyone you're working with.

Middlebrooks    01:11:08    I'll highlight that you said enculturate and not indoctrinate there, which is a major distinction. I'll just ask, because there are a bunch of things thrown out that the common model must be missing: what are your thoughts on consciousness, subjective experience? Is that an important part?

Rosenbloom    01:11:28    Well, so the problem with consciousness is no one knows what they’re talking about.  

Middlebrooks    01:11:31    The one problem with consciousness <laugh>.  

Rosenbloom    01:11:34    Right, right. One problem with consciousness is no one knows what they're talking about. Yeah. When you read literature that mentions consciousness or metacognition or reflection, there are many different kinds of things people are talking about. Sure. So one of the things we are talking about among the four of us is metacognition and reflection, which is one aspect, I think, of the full world that other people talk about under consciousness. It's not the notion of phenomena... that's not quite the right word, but

Middlebrooks    01:12:10    Phenomenology.  

Rosenbloom    01:12:11    Yeah. Because to me, much of that is still mysticism. But it is this notion of including the sense of self, a model of self, mm-hmm. <affirmative>, the ability to reflect on what you're thinking. So it's a set of concrete capabilities that we think are part of what goes by the term consciousness, that we think are in fact missing from the common model at this point, but exist in some of the architectures. Both Soar and Sigma have aspects of it. Mm-hmm. <affirmative>. Other architectures certainly have aspects of it as well. So we know it's missing. We knew it was missing at the beginning. We just didn't think there was a consensus. So as you said, the common model is abstract, and it was also deliberately incomplete. Mm-hmm. <affirmative>. Because we're only looking to include parts where there is consensus. And so we erred on the side of incompleteness rather than trying to force consensus where there wasn't any. But absolutely, the same holds for emotion.

Rosenbloom    01:13:09    A lot of people in AI think emotion is this epiphenomenal thing you don't need to worry about. There are folks in psychology who think it's the center of everything. Right. I think those of us working on the common model actually believe it is terribly important for cognition. I mean, I've referred to it as the wisdom of evolution. It provides forcing functions for us to do things in certain ways that we don't necessarily want to do, but that evolution has decided are wise for us. Now, that wisdom is very coarse, and so it sometimes steers us wrong. But to me, it involves physiology, it involves thought, it involves the architecture. Emotions change how we think; they change how we behave. Most of the work on emotion in AI is what I call cold emotion. It's all reasoning about emotion.

Rosenbloom    01:14:02    Yeah. It works just as well for reasoning about other people's emotions as about your own; it's all reasoning about it. And then there's the physiological stuff, where clearly things are changing in how your body works. But again, as an architecture person, the part I care about most is: how does the architecture sense emotional states, and how does its functioning change as a function of that? When you're angry, you think differently. It's not just a symbol in your working memory that you're angry, and it's not just chemicals; it actually changes how you think. So how does your architecture understand that state, and what does that actually change about how it operates? Those, to me, are the central things about emotion. Not everyone agrees with me about that, of course. But that's the kind of stuff I'm pushing when we talk about emotion in the common model. We actually held a virtual workshop on the topic, got a lot of good input, and got some consensus at the end of that, which again was surprising, but fun. And right now we're trying to build on that consensus to come up with a more concrete and more elaborate proposal that we can bring back to the community to talk about.
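To make that contrast concrete, here is a toy Python sketch of the difference between "cold" emotion (a symbol the system reasons about) and architectural emotion (a state that changes how processing itself works). Every name and number below is a hypothetical illustration, not any architecture's actual mechanism.

```python
# Toy sketch: "cold" emotion vs. architectural emotion. Illustrative only.

def deliberate(problem, max_depth):
    """Stand-in for deliberation: deeper search = more careful thinking."""
    return f"searched '{problem}' to depth {max_depth}"

class Agent:
    def __init__(self):
        self.working_memory = set()
        self.arousal = 0.0           # architectural emotional state, 0..1

    def cold_emotion(self):
        # The "cold" route: a symbol in working memory, reasoned about
        # like any other fact, with no effect on processing itself.
        self.working_memory.add(("self", "is", "angry"))

    def hot_emotion(self, arousal):
        # The architectural route: the state modulates how thinking works.
        self.arousal = arousal

    def think(self, problem):
        # High arousal narrows deliberation, a coarse forcing function in
        # the spirit of the "wisdom of evolution" framing above.
        depth = int(10 * (1.0 - self.arousal)) + 1
        return deliberate(problem, max_depth=depth)

agent = Agent()
print(agent.think("route home"))      # calm: deep, careful search
agent.hot_emotion(arousal=0.9)
print(agent.think("route home"))      # angry: shallow, fast search
```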

Middlebrooks    01:15:16    This is kind of an aside, but I was going to ask you, given your interest in computing as the fourth great discipline...

Rosenbloom    01:15:25    Scientific domain  

Middlebrooks    01:15:26    Yeah, domain, sorry. Just going off of what you were talking about with emotion and physiology, do you think of the mind as all computational, or is there something else to minds?

Rosenbloom    01:15:40    It depends on what you mean by computational.  

Middlebrooks    01:15:43    Okay. We could say Turing computation. Well, I'll let you answer, actually, instead of pinning you into a corner. Is it all computation all the way down?

Rosenbloom    01:15:54    So, I mean, there are people who have very narrow notions of what it means for something to be computational. Okay. For example, they limit it to symbol processing, and when you talk about numeric stuff, they think you're not talking about computation any longer, even though computers of course do computation, and in fact it's all grounded in digital processing on computers. There are theories like digital physics, which hypothesizes that the whole universe is basically a giant computer. Mm-hmm. <affirmative>. Of course, that's terribly controversial and not at all established. So there's the possibility that everything is computational at its base. I don't rule out analog computation as part of the world of computation as well. When you're talking about human minds and human bodies, there's clearly electrical and chemical processing, and there may be quantum things. There are folks who always want to push intelligence into the things we don't understand. Right. Quantum is one of the favorite aspects of that right now.

Middlebrooks    01:16:53    Yeah.  

Rosenbloom    01:16:54    But of course there's quantum computation, so maybe we'll understand that in terms of a form of computation. So I almost don't care whether you call chemical processing computation.

Middlebrooks    01:17:06    Okay.  

Rosenbloom    01:17:07    I mean, it's an interesting scientific question as to whether ultimately that's grounded in computation. I do believe that chemical signals matter in thinking. They're at a pretty low level, but a pervasive level. We can do pervasive things computationally as well, but whether we capture the effects of those chemical signals is a whole separate question. My guess is we haven't, and when you talk about emotions, that's clearly part of what's going on. And you have to understand how to capture the kind of pervasive, low-level effects that happen from that kind of thing.

Middlebrooks    01:17:49    Let me see how to phrase this.  

Middlebrooks    01:17:52    So on the one hand, in real brains, we've learned that it's a really highly integrative system, and there's a lot of recurrence and interaction. We've also learned, from systems neuroscience, that neuromodulators can switch these circuits into different regimes of behavior. And we've learned that cognitive functions are multiply realizable. Are these kinds of principles coming out of neuroscience and computer science part of the common model, sort of the backbone and infrastructure? And, sorry to ask another question on top of it: how do you think about the modularity of both cognitive architectures and the common model, with respect to what we're quote-unquote discovering more and more about brains, that they're more integrative and interactive than the strict modularity we used to assume?

Rosenbloom    01:18:54    In some sense, I'm the wrong person to answer that question, because neuroscience is a fairly large blind spot for me. You'd think I would be interested in the brain, but I was never that interested in the brain.

Middlebrooks    01:19:07    Would think so.  

Rosenbloom    01:19:08    Mind the brain was this messy biological thing and I, I never liked biology <laugh>. Um, so it’s always been a large blind spot for me. It’s just something I’ve had to accept. It was for Alan also. And, and it actually made more sense for him than for me because in other days of ai, what was going into symbolic aos so far from what people were understanding neuroscience, that there was just not a lot of room to talk. Yeah. Um, now with neuro neuroscience, the world is, it’s a very different world. Um, but Christian and more so Andrea in the common model world are much more up on that kind of thing than I am. Um, so they bring a general background. I mean, so AAR had earlier been mapped onto the brain and Right. And he’d worked with, with Randall O’Reilly on Sal, which is combination Baar and Libra. Um, so there words, those kinds of connections. And, and Dre is, is more deeply rooted in that world. Many of the kinds of general things you are talking about from neuroscience, where I can’t say are explicitly in the common model, um, to some extent Sigma might reflect them more than, than the common model  

Middlebrooks    01:20:11    Because of the generality Yeah. Of the different, right. Yeah.  

Rosenbloom    01:20:15    It says the module boundaries in the common model aren’t quite as real as we make them seem in the common model. That they are just different regimes or different portions of a broader, more general memory. Um, that may, so graphical architecture might be more like the brain. And then even Sigma’s cognitive architecture is still more like it because it only effectively has one long-term memory. Hmm. The divergence into two in a common model and three into, so happened kind of above the level of the cognitive model of cognitive architecture and sigma. So some of that may be reflected and hopefully we’ll see more of that coming out as time goes on. I’m really hoping that the stuff that, that Andrea Daco is doing and, um, in mapping sigma onto, let’s say the macro level of brain of, of functional circuits in the brain Yeah. And the communication patterns among those circuits, um, that will intrigue some of the neuroscience community. And so we’ll be able to see more interaction of that sort.  

Middlebrooks    01:21:23    C can you, let’s, let’s, before we move on, can you describe that that’s as, um, the first author is Andrea Staco and it’s comparing predictions made from the common model, uh, two predictions made from other types of kind of connective, um, connectome and functional, uh, network models, um, that are, uh, sort of popular. I mean, it, it tested it against like a few, uh, different versions of that. Um, maybe you can just describe a little bit more what, what happened and, and, um, and where the common model fares, of course, <laugh>  

Rosenbloom    01:21:56    Sure. I’ll, I’ll try. But again, Andrea, the expert on all of this. So you mentioned kind of the basic, um, components in the common model. There’s a long-term memory of working memory, uh, and, um, perception and, and motor modules. And they all kind of connect through that working memory. So it hypothesis of particular, you could say connectome between those modules. So what Andrea did, and you could say it’s built on, uh, earlier what had been done with Act R. He mapped the modules in the common model onto functional circuits in the brain. They’re not quite brain regions, but there are functional circuits. And then looked at the human connectome data to ask, okay, how do those different functional circuits communicate and how does that compare to the connections we have in the common model? And what he turned up was that in fact, the common model approach provided a much better model, at least for the seven or so domains of the human connectome tasks that he looked at.  

Rosenbloom    01:22:59    Then you got from the traditional hub and spoke and hierarchical models from neuroscience. And that was, was kind of really, that was completely unexpected for us. We didn’t think the common model that’s abstract and incomplete have necessarily anything to do with the, the brain pretty showed that in fact there was this really neat macro level connection, and then it might tell neuroscience some then it might be able to provide feedback to the common model from more of what’s being learned in neuroscience. So that’s essentially what’s going on. And he applied some kind of interesting ne techniques for being able to establish that these kinds of connections were a better model than what people were traditionally looking at.  

Middlebrooks    01:23:36    Not to beat a dead horse at all, but going back to the question of bias, right? Think of the kinds of tasks that we run in labs to come up with these functional circuits, using the psychological terms that we use. Do you think there's any particular issue with testing, say, a very specific kind of working memory task in the lab, and then, voila, there's a working memory circuit? And since the common model has a working memory module, it maps really well onto this. But in some sense that could be putting the cart before the horse <laugh>, or, a snake eating its own tail, right? Because it defined the things to test in the lab that it is purporting to explain.

Rosenbloom    01:24:27    Right? I mean, so you’ve raised the, the whole humongous issue of ecological validity, of psychological experimentation.  

Middlebrooks    01:24:35    Thank you. I could have said it much faster, but thank you. Thanks for doing that for me.  

Rosenbloom    01:24:39    And I can’t say we have any additional solution than the, than the field is all have, but, but yes, it is potentially vulnerable to those same kinds of critiques mm-hmm.  

Middlebrooks    01:24:48    <affirmative>. Okay. Fair enough. What do you think of the current boom of deep learning? I, I know that some of the cognitive architectures are incorporating deep learning networks, but as you’re, you know, talking just earlier, the deep learning world is, is sort of taking this cognitive architecture, um, I don’t know if it’s inspiration, but a approach in some sense and starting to piece together these various narrow networks and to work towards something more general. So what is your broad overview or thoughts?  

Rosenbloom    01:25:18    Okay. So as an older person, I’m <laugh> viewing, um, as entertainment to some extent <laugh> mm-hmm.  

Middlebrooks    01:25:28    <affirmative>.  

Rosenbloom    01:25:29    Um, I like that. I mean, I try to <laugh>, I mean, as, as the person, as the kind of person they’re often railing against, I could get very defensive about it. And there are some folks in our community who do. Um, but to me it’s, it’s par for the course. It’s the standard operating procedure. So very community that for many years felt suppressed. Uh, there’s the Minsky Pepper book and early results, which kind of showed that the simplest versions of these models couldn’t do what people thought they could. And that effectively killed off the field for, for a decade except for people like Roseburg and others who stuck with and Jeff Hinton Yeah. And others. So they have a large ship on their shoulder about, uh, Jeff has even talked about how Alan Newell and Herb Simon misdirected the field for a long time. I think he’s completely inappropriate and what he’s saying in those kinds of things, but, um, but I understand the chip they have on their shoulder.  

Rosenbloom    01:26:27    Um, so I’m watching it from this perspective saying AI was like that in its infancy. It was separating itself out from everything else, and it was wildly ambitious and completely over optimistic. And, um, and that’s just completely natural when all of a sudden you’re at the point where there’s stuff you’ve been working on, you see, oh, it can do all this stuff. And so they’re pushed on by their success, which is completely natural. So all the criticisms and when they’re accurate, but they’re completely understandable as far as I’m concerned, and, and they have gotten way beyond, I thought they would be able to with their current set of very simple stack in technology. Hmm. I mean, if you look over the history of a, of AI or probably in science in general, they’re always booms. Uh, when something interesting happens, and they always ask ’em towed out, but you can never tell when or why or how high. Um, and so those, when people ask me, they say, yeah, it’s gonna asim, I just can’t tell, I can’t answer those questions. And it turns out they’ve asim to at a much higher level than I ever would’ve expected. I mean, and mean they might mean not have even reached an Asim tote. Right, right. General models are doing right now. It’s just, I don’t understand how they could possibly do what they’re doing, even though I understand what the technology’s actually doing. Um,  

Middlebrooks    01:27:51    You don’t think this with such compute and such huge volumes of data that it, uh, and it’s just making those statistical correlations and going for it.  

Rosenbloom    01:27:59    Well, I understand all of that. Yeah. But I wouldn’t have Well, that, just doing that, would you, so I played with Tachi et t many people have, it just shouldn’t be my model of, of large amounts of data and statistical correlations and prediction and transformers and whatever else they use. I just don’t see how it accomplishes it. So there’s this big gap in my mind, um, where it is just doing too good a job, even with all implementations. Hmm. You know, it’s getting things wrong, and I know, and you know, it doesn’t know when it’s getting things wrong. Right. And, you know, it’s, I mean, it’s shallow and it’s all these other limitations, but it’s still incredible. And I just, it’s like magic. Um,  

Middlebrooks    01:28:40    Do, do you think that in any of our, like one interpretation of that is that wow, it is, uh, you know, let’s say a, a large language model that generates, you know, really rich text and it’s very impressive. One interpretation is Wow, it’s like almost as great as us, but another interpretation is, ooh, maybe our language is not, uh, that impressive after all. Like that it’s not the height, height of cognition or something.  

Rosenbloom    01:29:03    So, I mean, I think the secret really is that, I mean, in generating a language model, they’re looking at text describing everything we know. Yeah. And so implicitly in the language model includes aspects of everything we know, or almost everything. And so there’s a way, it’s a summary of all knowledge in the way the web is, but in a different form. Mm-hmm. <affirmative>. Um, but the fact that it coherently can’t coherently at a larger scale than individual sentences produce useful answers is just kind of incredible. Hmm. But still, as I said, I think it’s a, it’s the next great model of the subconscious, uh, which means it’s missing all aspects that are required for, for full intelligence and in, in order to counter the weaknesses to have which the subconscious has as well. Hmm. Um, and I think there are variety of members of the community that are coming to realize that, and they’re looking for alternatives. Whether they’ll find a fully disable version like the Beyond Wants or there will be something else, um, is, is an open question. Um, but I’m sure they will be pushed in the direction of, of adding more of the capabilities at ti as time goes by, because even though I’m sure they’re are enjoying and pushing as hard as they can and all the applications of what they’ve got, they’re also looking at what the limitations are and how to overcome them.  

Middlebrooks    01:30:22    Yeah. You, uh, uh, wrote in your memoir, I believe you wrote about that in your memoir, unless I’m misremembering and I saw it in an interview or something, uh, that you get a lot more satisfaction and intellectual, uh, inspiration out of going to AGI conferences rather than, uh, you know, psychology or computer science or any of those other fields that you’re kind of a part of. Uh, I just wanna, uh, ask you about that and maybe you can just, um, reflect on why that has.  

Rosenbloom    01:30:53    Well, so they’re, they’re clearly quite different. I mean, for people who don’t know them, the a g i arch conferences, at least traditionally, were kind of all over the place. Um, lots of people spouting big ideas, um, often with little support  

Middlebrooks    01:31:08    That’s different than these days.  

Rosenbloom    01:31:11    I haven’t been recently because I’m okay. I mean, the pandemic and I’m limited. But, um, my guess is it’s that there’s a, a bit more rigor now, but I, I don’t know if that’s actually true, I suppose, through a traditional conference, which, which, which matters most is the rigor and much less, whether you’re saying anything interesting. Um, so if you’ve done your experiments properly and it’s something new, even though it doesn’t matter, you can get accepted usually. And to me, as I mentioned, what I care most about are ideas that get me thinking in new ways, and the next incremental idea just leaves me flat. So that’s why I’ve often enjoyed the A g I conferences. They’re people thinking big. Um, they may of course be totally wrong, um, but they give me thinking, Hmm. And that’s really what I care about in at a conference. I’m gonna go someplace and get me p get where people will get me thinking in a different way than I normally do.  

Middlebrooks    01:32:06    You think other  

Rosenbloom    01:32:07    Ally, that will happen at regular conference as well, there are very groundbreaking where published at regular conferences, but the, the vast majority is just incremental progress on existing topics.  

Middlebrooks    01:32:17    So it’s really suited to your, uh, personality and desire as an explorer, I suppose, to, um, to seek that sort of inspiration rather than necessarily the answer to the next question that you’re asking or something.  

Rosenbloom    01:32:30    Uh, say most of my colleagues in the academic cognitive science world, um, consider a a g I abhorrent. Right.  

Middlebrooks    01:32:37    That’s why I was asking Yeah. Bunch  

Rosenbloom    01:32:39    Of loose cannons that have no methodology whatsoever and just spout off whatever they wanna say. Yeah. Um, I’ll say, yep, it’s true, but I still find them interesting.  

Middlebrooks    01:32:49    And they dress different. A lot of ’em. Yeah. <laugh>,  

Middlebrooks    01:32:54    You know what, before I let you go, I just want, I would be a amaz if I didn’t mention one of the things I really appreciated that you wrote about. Um, and I, I don’t need to ask you a question about it. I just appreciated your self realization that at different points in your life, you have come to appreciate other facets of science and, um, been been able to take on board different ways of thinking. And I think that that is an underappreciated, um, realization that more people should be, uh, should take on in their own lives and kind of more accept who they are at the time in that part of their life.  

Rosenbloom    01:33:28    I completely agree with that. I made a wrote down partly because it seems like I’ve never seen that mentioned anywhere. Yeah. And it clearly describes my life, and I assume it describes most people’s lives. I  

Middlebrooks    01:33:37    Think so,  

Rosenbloom    01:33:38    But it’s never kind of articulated and it helps people understand themselves and to accept how they are at that point and think about, well, I might be different in the future, um, and sort of just look forward to see what happens. Yeah.  

Middlebrooks    01:33:54    So it, it’s a beautiful way to think about it. Okay, Paul, I’ve take, I’ve taken you long enough. I really appreciate your time. Thanks for being on.  

Rosenbloom    01:34:00    Okay. And thank you again. Okay. Bye.  

Middlebrooks    01:34:18    I alone produce Brain inspired. If you value this podcast, consider supporting it through Patreon to access full versions of all the episodes and to join our Discord community. Or if you wanna learn more about the intersection of neuroscience and ai, consider signing up for my online course, neuro ai, the quest to Explain intelligence. Go to brandin inspired.co to learn more, to get in touch with me, emailPaul@brandinspired.co. You’re hearing music by the new year. Find them@thenewyear.net. Thank you. Thank you for your support. See you next time.