Wave

thinking about parallel computing for thinking

scijones

Soar is a “cognitive architecture”. It attempts to implement the components necessary to support general intelligence, but it's not crypto/AI/techbro hype: it's a decades-long academic research project, and it has been useful for robotics and cognitive modeling.

Soar is single core.

With increasing availability of parallel computing, it may seem odd that an architecture that attempts to implement the basis for intelligence is largely single core.

While aspects of our code base could be parallelized for some performance gain, it seems that some serial bottleneck is inescapable, and that bottleneck appears humanlike, at least for complex decision-making. This got me wondering whether some underlying constraint demands that this be the case.

This led me to think about swarm intelligence and about whether an intelligence embodied in a swarm would face the same constraints. From what I can tell, in such an intelligence, you would want the distributed knowledge in the swarm to have a guarantee of “eventual consistency” so that the swarm can be considered coherent in terms of having knowledge that it can apply to goals. (Otherwise, you really end up with a complex system that isn't a single “mind”. Not that there's anything wrong with that, but I'm defining it as “out of scope” for what I'm talking about.)

So, I wonder, would imposing a requirement of eventual consistency on a hivemind's distributed knowledge still impose some form of Amdahl's law bottleneck? Well, the CAP theorem and PACELC theorem end up suggesting to me that if you really, really want consistency, then you reduce the availability of knowledge and increase the latency with which knowledge is available. But you don't have to do this. You can keep knowledge highly available at low latency, but you introduce a need to “fix” things later, sometimes. So, a hivemind would likely be able to “splinter” and still act rapidly, but if it has eventual consistency, there must eventually be some repair process that makes each part of the swarm agree on some observed sequence of events.
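To make that Amdahl's law worry concrete, here's a tiny sketch. The 5% “serial repair” fraction is a number I made up for illustration, not a measurement of any real system:

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Amdahl's law: the speedup from n_workers when a fixed fraction
    of the work is inherently serial (say, a consistency-repair pass)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Hypothetical hivemind where 5% of its "thinking" is serial repair:
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

With a 5% serial fraction, no swarm size ever gets you past a 20x speedup (1/0.05); that kind of ceiling is the flavor of constraint I'm speculating about.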

From Wikipedia: “The default versions of DynamoDB, Cassandra, Riak and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency.” I wonder, in these systems, in practice in real environments, to what degree the processing of “tombstones” https://en.wikipedia.org/wiki/Tombstone_(data_store) imposes an Amdahl's-law-style serialization. Tombstones are basically markers left behind by deletes that still need to be processed to actually guarantee eventual consistency, and they introduce additional computational cost. Clearing out tombstones happens during “compaction”.
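As a toy illustration of why tombstones exist at all (this is a sketch of the general idea, not how any of those databases actually implements it): a last-write-wins replica can't simply drop a deleted key, because a peer might re-introduce the stale value during repair. So deletes leave tombstones, anti-entropy merges spread them, and a later compaction pass purges the expired ones. That purge is the extra cleanup cost in question. The grace period mimics something like Cassandra's gc_grace_seconds, with a made-up value:

```python
import time

TOMBSTONE = object()   # sentinel marking a deleted key
GRACE_SECONDS = 10.0   # toy stand-in for a tombstone grace period

class Replica:
    """Toy last-write-wins key-value replica."""

    def __init__(self):
        self.store = {}  # key -> (timestamp, value or TOMBSTONE)

    def put(self, key, value, ts=None):
        self.store[key] = (ts if ts is not None else time.time(), value)

    def delete(self, key, ts=None):
        # Record a tombstone instead of dropping the key outright.
        self.store[key] = (ts if ts is not None else time.time(), TOMBSTONE)

    def get(self, key):
        rec = self.store.get(key)
        return None if rec is None or rec[1] is TOMBSTONE else rec[1]

    def merge(self, other):
        """Anti-entropy repair: keep the newest write for each key."""
        for key, (ts, val) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, val)

    def compact(self, now=None):
        """Purge tombstones older than the grace period."""
        now = now if now is not None else time.time()
        self.store = {k: (ts, v) for k, (ts, v) in self.store.items()
                      if not (v is TOMBSTONE and now - ts > GRACE_SECONDS)}

# Two replicas diverge, then repair:
a, b = Replica(), Replica()
a.put("goal", "forage", ts=1.0)
b.merge(a)                 # b learns the value
a.delete("goal", ts=2.0)   # a deletes; tombstone recorded
b.merge(a)                 # the newer tombstone wins on b
assert b.get("goal") is None
b.compact(now=20.0)        # grace period elapsed; tombstone purged
assert "goal" not in b.store
```

Even in this toy, notice the shape of the cost: compaction is a pass over the whole store that every replica must eventually perform, regardless of how parallel the reads and writes are.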

And all of this explains why, before I've even had my second coffee, I'm now considering reading this paper:

Fast Compaction Algorithms for NoSQL Databases

Let's hope this goes somewhere!

About the Author

scijones

I like to write highly speculative AI and CS ideas.

Reviewer: Mia Rose Winter
