For this one, I'm talking about the Soar Cognitive Architecture as AI engineering software, and not so much about it in relation to humans. The software attempts to support cognition, and core to that is a procedural memory that breaks decision making into phases.
At least every 50ms, the architecture goes through the phases of 1) proposing different options for (changes to) its actions, 2) selecting among those options, and 3) determining how to actually do the thing it selected. Those are called “propose, decide, apply”.
That's the core loop of how Soar implements cognition.
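To make that concrete, here's a minimal sketch of the cycle in Python. This is my own toy rendition, not Soar's actual implementation: in real Soar, each phase is driven by production rules matching against working memory, whereas here plain functions (and a silly counter “domain”) stand in for that knowledge.

```python
# A toy rendition of Soar's propose-decide-apply cycle. In real Soar each
# phase is driven by production rules matching working memory; here plain
# functions stand in for that rule knowledge, and the "domain" is just a
# counter we want to drive to 10.

def propose(state):
    # Propose candidate operators: here, increments of different sizes.
    return [1, 2, 5]

def decide(state, candidates):
    # Select among candidates. Real Soar uses encoded/learned preferences;
    # this toy just prefers the biggest step that doesn't overshoot.
    legal = [c for c in candidates if state + c <= 10]
    return max(legal)

def apply(state, operator):
    # Apply the selected operator to produce the next state.
    return state + operator

state = 0
while state < 10:          # in Soar, each pass through this loop is ~50ms
    options = propose(state)
    chosen = decide(state, options)
    state = apply(state, chosen)
print(state)  # 10
```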
Each of these phases is implemented as part of the architecture, but how a particular agent proceeds through them depends on the knowledge it has. For example, consider an agent navigating a maze, and suppose it hits an intersection. In the “propose” phase, it might propose a movement toward each branch of the intersection. What should it “select”? An agent without a map of the maze simply wouldn't have enough knowledge to choose among those options. But, let's suppose the agent has a map. A classic form of planning is to search through the implications of actions, as hypotheticals. “According to the map, what would happen if I went this way? Well, I'd face this next intersection. What if I go that way after that?” (This is just classic search-based planning.)
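As a sketch of that hypothetical lookahead, here's a tiny depth-first search over a made-up maze map (the map and names are mine, purely for illustration):

```python
# Lookahead over a made-up maze map: from each intersection, imagine taking
# each branch and recurse until the goal is reached (plain depth-first search).
MAZE = {                     # adjacency list standing in for the agent's map
    "start": ["a", "b"],
    "a": ["dead-end"],
    "b": ["c"],
    "c": ["goal"],
    "dead-end": [],
    "goal": [],
}

def lookahead(here, goal, visited=()):
    """'What would happen if I went this way?' applied recursively."""
    if here == goal:
        return [here]
    for branch in MAZE[here]:
        if branch not in visited:
            path = lookahead(branch, goal, visited + (here,))
            if path:
                return [here] + path
    return None

print(lookahead("start", "goal"))  # ['start', 'b', 'c', 'goal']
```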
The way Soar affords this kind of search is through “subgoaling”. When an agent has multiple options and doesn't know what to pick, it creates a subgoal. This can be recursive (allowing the agent to chain hypothetical situations). Soar isn't limited to classic search-based planning, but hopefully it's illustrative.
More generally, Soar creates subgoals when it doesn't have the knowledge it needs to proceed through a phase of cognition.
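Here's a rough sketch of that idea for the “decide” phase: selection fails with a tie, and the subgoal's job is to produce the missing preference knowledge, say by evaluating each tied option hypothetically. None of these names are Soar's API; this is just the shape of the mechanism.

```python
# Sketch: when selection knowledge is insufficient (a tie), create a subgoal
# whose job is to produce that knowledge, e.g. by evaluating each candidate
# hypothetically. The subgoal is resolved by the same kind of processing, so
# the scheme can recurse.
class Tie(Exception):
    def __init__(self, candidates):
        self.candidates = candidates

def decide(state, candidates, preferences):
    scores = [(preferences.get((state, c), 0), c) for c in candidates]
    best = max(score for score, _ in scores)
    tied = [c for score, c in scores if score == best]
    if len(tied) > 1:
        raise Tie(tied)            # not enough knowledge to pick
    return tied[0]

def select(state, candidates, preferences, evaluate):
    try:
        return decide(state, candidates, preferences)
    except Tie as tie:
        # Subgoal: evaluate each tied option (perhaps via lookahead against
        # the map, as above), record the results, and try deciding again.
        for c in tie.candidates:
            preferences[(state, c)] = evaluate(state, c)
        return decide(state, candidates, preferences)

prefs = {}
pick = select("intersection", ["left", "right"], prefs,
              evaluate=lambda s, c: {"left": 3, "right": 7}[c])
print(pick)  # right
```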
From “Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies” page 33: “Universal subgoaling is the ability of the agent to set up subgoals for all possible difficulties that it can face. Although these difficulties can arise in an indefinite number of different contexts, the taxonomy of difficulties must be a small and well-defined set.”
It might not have sufficient options, might not know what to pick, or might not know how to implement/apply what it selected. That's the taxonomy. Because Soar represents procedural memory as being composed of knowledge for these phases, we claim it has “universal subgoaling”. In other words, no matter what procedural knowledge it is missing, it has a way to create a goal of learning it.
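In sketch form, with my own names for the difficulties (Soar's actual impasse types are a bit more fine-grained: ties, conflicts, constraint failures, and state/operator no-changes):

```python
from enum import Enum, auto

class Difficulty(Enum):
    NO_OPTIONS = auto()   # propose: nothing acceptable was suggested
    TIE = auto()          # decide: can't pick among the options
    NO_PROGRESS = auto()  # apply: the selected operator isn't doing anything

def subgoal_for(difficulty):
    """Every difficulty maps to a goal of acquiring the missing knowledge,
    regardless of the task-level context it arose in."""
    return {
        Difficulty.NO_OPTIONS: "find or construct candidate operators",
        Difficulty.TIE: "learn preferences that break the tie",
        Difficulty.NO_PROGRESS: "learn how to apply the operator",
    }[difficulty]

print(subgoal_for(Difficulty.TIE))  # learn preferences that break the tie
```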
Now, that's all well and good, but knowing what knowledge you're missing and being able to actually learn it are two different things. I provided the example of search-based planning as one way to learn in response to a lack of knowledge about what to select. We've implemented A* search, means-ends analysis, iterative-deepening search, all sorts of stuff for learning in response to subgoals from that phase.
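Iterative deepening, for instance, is just the lookahead from earlier with a depth cap that grows each pass. A sketch over the same toy map:

```python
def depth_limited(here, goal, limit, visited=()):
    # Same depth-first lookahead as before, but give up below the cap.
    if here == goal:
        return [here]
    if limit == 0:
        return None
    for branch in MAZE[here]:
        if branch not in visited:
            path = depth_limited(branch, goal, limit - 1, visited + (here,))
            if path:
                return [here] + path
    return None

def iterative_deepening(start, goal, max_depth=10):
    # Re-run depth-limited search with a growing cap: cheap shallow passes
    # first, so the shallowest solution is found without chasing deep
    # dead ends.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, limit)
        if path:
            return path
    return None

print(iterative_deepening("start", "goal"))  # ['start', 'b', 'c', 'goal']
```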
Stepping back though, if cognition usually means proceeding through those phases, we can conceive of metacognition as processing about one's ability to proceed through those phases.
Coming back to search-based planning, a classic difficulty for forward search in cluttered environments is a huge branching factor. There are just too many options to consider in a reasonable amount of time, and none clearly make progress toward a goal. In some situations, such as navigation, a forward search like A* is useful, but it's not great for determining the steps to clean the kitchen; an agent has all sorts of affordances most of the time. Instead, searching backwards from a goal like “clean dishes in the dish rack” might be more efficient. The agent need not even consider actions like walking into another room, because they aren't relevant to its current goals. This is a case where means-ends analysis is much more efficient for determining how to achieve the goal. (Working backwards from the goal, you quickly identify placing a dried dish, drying a dish, and cleaning a dish.) As agent designers, we could encode a heuristic based on branching factor, or perhaps the type of goal, to select the form of planning.
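Here's a sketch of that backwards chaining for the dish example. The action table is made up; the point is that the plan falls out of asking, for each goal, which action achieves it and what that action requires.

```python
# Sketch of means-ends analysis for the dish example: work backwards from
# the goal, asking what action achieves it and what precondition that
# action has. The action table is invented for illustration.
ACTIONS = {
    # effect: (action that achieves it, precondition of that action)
    "dish placed in rack": ("place dish", "dish dried"),
    "dish dried": ("dry dish", "dish cleaned"),
    "dish cleaned": ("clean dish", None),
}

def means_ends(goal):
    """Return the actions needed for `goal`, in execution order."""
    plan = []
    while goal is not None:
        action, precondition = ACTIONS[goal]
        plan.append(action)      # the action that achieves the current goal
        goal = precondition      # now achieve its precondition first
    return list(reversed(plan))  # backwards chaining, forwards execution

print(means_ends("dish placed in rack"))
# ['clean dish', 'dry dish', 'place dish']
```

Note that nothing outside the goal's chain of preconditions is ever examined, which is exactly why the huge branching factor of the kitchen stops mattering.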
But, alternatively, let's suppose this agent itself is faced with the problem of not knowing which problem-solving method to employ. Well, here's what the agent's cognitive state might look like as a stack: clean the kitchen <- determine next step <- determine how to determine the next step. The agent could have a subgoal of determining which way to think about the problem, and this can arise naturally from Soar's subgoaling.
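In sketch form, this is the same tie-breaking machinery applied one level up: the tied candidates are reasoning methods rather than kitchen actions, so resolving the tie is reasoning about reasoning. The method names and heuristic here are illustrative, not anything prescribed by Soar.

```python
# The same tie-breaking idea, one level up: the tied candidates are
# reasoning methods, not kitchen actions. The heuristic and method names
# are made up for illustration.
def choose_method(methods, branching_factor):
    if len(methods) > 1:
        # A tie among methods is an impasse in the subgoal itself:
        # "determine how to determine the next step".
        return ("forward search (A*)" if branching_factor < 4
                else "means-ends analysis")
    return methods[0]

goal_stack = [
    "clean the kitchen",                         # task-level goal
    "determine next step",                       # subgoal: operator tie
    "determine how to determine the next step",  # sub-subgoal: method tie
]
for depth, goal in enumerate(goal_stack):
    print("  " * depth + goal)

print(choose_method(["forward search (A*)", "means-ends analysis"],
                    branching_factor=30))        # means-ends analysis
```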
This means that in some cases, we can implement agents with metacognition in Soar. So, in an engineering sense, Soar supports metacognition naturally, just as an extension of how it already implements cognition. The implementation is literally the agent thinking about its thinking (insofar as you are willing to conceive of its cognitive cycle as thought). I just think that's neat.
I've been focusing on what happens when an agent does not have enough knowledge in the “decide” phase (can't pick between options). I don't have metacognition worked out, by any means, but I can imagine similar stories for the other phases. And, frankly, this is a blog post with me spitballing, but I'm just genuinely pleasantly surprised by how naturally this sort of thing seems supported in Soar.
