Wave

hopefully you'd implement metacognition as cognition about cognition, right?

scijones

For this one, I'm talking about the Soar Cognitive Architecture as AI engineering software, and not so much about it in relation to humans. This software attempts to support cognition. Core to this is a procedural memory that breaks decision making into phases.

At least every 50ms, the architecture goes through the phases of 1) proposing different options for (changes to) its actions, 2) selecting among those options, and 3) determining how to actually do the thing it selected. Those phases are called “propose, decide, apply”.
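As an illustrative sketch (not Soar itself, which is rule-based), the loop might look like this in Python. The `State` type, the operator names, and the trivial decision rule are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    position: str
    visited: set = field(default_factory=set)

def propose(state):
    """Phase 1: propose candidate operators for the current state."""
    # A real agent's proposals come from its procedural knowledge (rules).
    return ["go-north", "go-east"] if state.position == "intersection" else ["go-forward"]

def decide(state, candidates):
    """Phase 2: select one operator (here: a trivially fixed preference)."""
    return candidates[0]

def apply_operator(state, operator):
    """Phase 3: carry out the selected operator, changing the state."""
    state.visited.add(state.position)
    state.position = operator.removeprefix("go-")
    return state

state = State(position="intersection")
chosen = decide(state, propose(state))
state = apply_operator(state, chosen)
print(state.position)  # "north"
```

The point of the sketch is only the shape of the cycle: each phase is a separate step, and each step consults knowledge about the current state.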

That's the core loop of how Soar implements cognition.

Each of these phases is implemented as part of the architecture, but the specifics of how a particular agent proceeds through those phases depend on the knowledge it has. For example, let's consider an agent navigating a maze and pretend the agent hits an intersection. One way the agent could “propose” is to propose a movement toward each branch at the intersection. What to “select”? Well, I can imagine an agent without knowledge of the map of the maze simply wouldn't have enough knowledge to select among those options. But, let's suppose the agent has a map. A classic form of planning is to search through the implications of actions, as hypotheticals. “According to the map, what would happen if I went this way? Well, I'd face this next intersection. What if I go that way after that?” (This is just classic search-based planning.)

The way Soar affords this kind of search is through “subgoaling”. When an agent has multiple options and doesn't know what to pick, it creates a subgoal. This can be recursive (allowing the agent to chain hypothetical situations). Soar isn't limited to classic search-based planning, but hopefully it's illustrative.
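A toy sketch of that idea in Python: when the agent can't choose among moves, it evaluates each one against a hypothetical walk through the map, and the lookahead is naturally recursive. The maze map and the goal name are invented for illustration; real Soar subgoaling operates over substates, not a hand-coded recursion:

```python
MAZE = {  # adjacency map: which cells connect to which
    "A": ["B", "C"], "B": ["D"], "C": [], "D": ["goal"], "goal": [],
}

def evaluate(cell, goal, depth=0, limit=5):
    """Subgoal: hypothetically walk the map to see if `cell` can reach `goal`."""
    if cell == goal:
        return True
    if depth >= limit:
        return False
    # Recursive subgoaling: each hypothetical branch can spawn its own lookahead.
    return any(evaluate(nxt, goal, depth + 1, limit) for nxt in MAZE[cell])

def decide(options, goal):
    """If knowledge can't rank the options, create a subgoal to evaluate each."""
    for option in options:
        if evaluate(option, goal):
            return option
    return None

print(decide(MAZE["A"], "goal"))  # "B" — the branch that leads to the goal
```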

More generally, Soar creates subgoals when it doesn't have the knowledge it needs to proceed through a phase of cognition.

From “Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies” page 33: “Universal subgoaling is the ability of the agent to set up subgoals for all possible difficulties that it can face. Although these difficulties can arise in an indefinite number of different contexts, the taxonomy of difficulties must be a small and well-defined set.”

It might not have sufficient options, might not know what to pick, or might not know how to implement/apply what it selected. That's the taxonomy. Because Soar represents procedural memory as being composed of knowledge for these phases, we claim it has “universal subgoaling”. In other words, no matter what procedural knowledge it is missing, it has a way to create a goal of learning it.
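That taxonomy can be made concrete with a small sketch: at the end of each cycle, check which phase failed for lack of knowledge, and push a subgoal named for the difficulty. The labels below loosely mirror Soar's impasse types but are simplified, and the `agent` interface is invented:

```python
def detect_impasse(proposed, selected, applied):
    """Classify which phase lacked knowledge, if any."""
    if not proposed:
        return "no-options"   # propose phase produced nothing
    if selected is None:
        return "tie"          # couldn't choose among the proposals
    if not applied:
        return "no-change"    # selected operator couldn't be applied
    return None               # no impasse: cognition proceeds

def cycle(agent, state, goal_stack):
    proposed = agent.propose(state)
    selected = agent.decide(state, proposed)
    applied = agent.apply(state, selected) if selected is not None else False
    impasse = detect_impasse(proposed, selected, applied)
    if impasse:
        # Universal subgoaling: whatever knowledge was missing,
        # a goal of acquiring it is pushed onto the stack.
        goal_stack.append((impasse, state))
    return state
```

Because the taxonomy is small and closed (one difficulty per phase), the same impasse-detection machinery covers every kind of missing procedural knowledge.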

Now, that's all well and good, but knowing what knowledge you're missing and being able to actually learn it are two different things. I provided the example of search-based planning as one way to learn in response to a lack of knowledge for what to select. We've implemented A* search, means-ends analysis, iterative-deepening search, all sorts of stuff for learning in response to subgoals from that phase.
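One of the methods mentioned above, iterative-deepening search, is small enough to sketch here over a toy state graph (the graph itself is invented for the example):

```python
GRAPH = {"start": ["a", "b"], "a": ["c"], "b": [], "c": ["goal"], "goal": []}

def depth_limited(node, goal, limit, path):
    """Depth-first search that gives up below a fixed depth bound."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in GRAPH[node]:
        found = depth_limited(nxt, goal, limit - 1, path + [nxt])
        if found:
            return found
    return None

def iterative_deepening(start, goal, max_depth=10):
    # Re-run depth-limited search with a growing bound; the first solution
    # found is a shortest one, at only linear memory cost.
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, limit, [start])
        if result:
            return result
    return None

print(iterative_deepening("start", "goal"))  # ['start', 'a', 'c', 'goal']
```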

Stepping back though, if cognition usually means proceeding through those phases, we can conceive of metacognition as processing about one's ability to proceed through those phases.

Coming back to search-based planning, a classic difficulty for forward search in cluttered environments is a huge branching factor. There are just too many options to consider in a reasonable amount of time, and none clearly makes progress toward a goal. Maybe in some situations, such as navigation, a forward search like A* is useful, but it's not great for determining the steps to clean the kitchen. An agent has all sorts of affordances most of the time. Instead, searching backwards from the goal(s), like “clean dishes in the dish rack”, might be more efficient. The agent need not even consider some actions, like walking into another room, because they aren't relevant to its current goals. This is a case where means-ends analysis is much more efficient for determining how to achieve the goal. (Working backwards from the goal, you quickly identify placing a dried dish, drying a dish, cleaning a dish.) As agent designers, we could encode a heuristic based on branching factor or perhaps the type of goal to select the form of planning.
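The backward-chaining flavor of means-ends analysis can be sketched in a few lines: starting from the goal, pick an operator whose effect achieves it, and chain that operator's preconditions as new subgoals. The dish operators below are invented to match the example:

```python
OPERATORS = {
    # effect achieved: (operator name, preconditions)
    "dish-in-rack": ("place-dried-dish", ["dish-dry"]),
    "dish-dry":     ("dry-dish",         ["dish-clean"]),
    "dish-clean":   ("wash-dish",        []),
}

def plan_backwards(goal):
    """Return the operator sequence, in execution order, that achieves `goal`."""
    if goal not in OPERATORS:
        return []  # already satisfied, or no known operator
    name, preconditions = OPERATORS[goal]
    steps = []
    for pre in preconditions:
        steps.extend(plan_backwards(pre))  # each precondition is a subgoal
    return steps + [name]

print(plan_backwards("dish-in-rack"))
# ['wash-dish', 'dry-dish', 'place-dried-dish']
```

Notice that nothing in this search ever considers walking into another room: irrelevant operators simply never match a subgoal.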

But, alternatively, let's suppose this agent itself is faced with the problem of not knowing what problem solving method to employ. Well, here's what the agent's cognitive state might look like as a stack: clean the kitchen <- determine next step <- determine how to determine the next step. The agent could have a subgoal of determining which way to think about the problem, and this can arise naturally from Soar's subgoaling.
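As a tiny sketch of that stack, with the metacognitive level being an ordinary subgoal whose job is method selection (the method names and branching-factor heuristic here are invented, not how Soar resolves such an impasse):

```python
def choose_method(branching_factor):
    """The metacognitive step: select a problem-solving method for the subgoal.
    Hypothetical heuristic: a huge branching factor suggests working backwards."""
    return "means-ends-analysis" if branching_factor > 10 else "forward-search"

goal_stack = ["clean the kitchen", "determine next step"]
# Impasse inside the subgoal -> a further, metacognitive subgoal:
goal_stack.append("determine how to determine the next step")
method = choose_method(branching_factor=40)
print(" <- ".join(goal_stack))
print(method)  # means-ends-analysis
```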

This means that in some cases, we can implement agents with metacognition in Soar. So, in an engineering sense, Soar supports metacognition in a natural way just as an extension to how it already implements cognition. The implementation is literally the agent thinking about its thinking (insomuch as you are willing to conceive of its cognitive cycle as thought). I just think that's neat.

I've been focusing on what happens when an agent does not have enough knowledge in the “decide” phase (can't pick between options). I don't have metacognition worked out, by any means, but I can imagine similar stories for the other phases. And, frankly, this is a blog post with me spitballing, but I'm just genuinely pleasantly surprised by how naturally this sort of thing seems supported in Soar.

About the Author

scijones

I like to write highly speculative AI and CS ideas.

Mia Rose Winter, Reviewer
