Soar is a “cognitive architecture”. It attempts to implement the components necessary to support general intelligence, but it's not crypto/AI/techbro hype: it's a decades-long academic research project, and it has been useful for some robotics and cognitive modeling.
Soar is single core.
With the increasing availability of parallel computing, it may seem odd that an architecture attempting to implement the basis for intelligence is largely single core.
While aspects of our code-base could be parallelized for some performance gain, some serial bottleneck seems inescapable, and that bottleneck appears humanlike, at least for complex decision-making. This got me wondering whether there are underlying constraints that demand this be the case.
This led me to thinking about swarm intelligence and about whether an intelligence embodied in a swarm would face the same constraints. From what I can tell, in such an intelligence, you would want the distributed knowledge in the swarm to have a guarantee of “eventual consistency”, so that the swarm can be considered coherent in the sense of having knowledge that it can apply to goals. (Otherwise, you end up with a complex system that isn't really a single “mind”. Not that there's anything wrong with that, but I'm defining it as “out of scope” for what I'm talking about.)
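To make that concrete, here's a minimal sketch (in Python, with made-up names and a toy last-write-wins rule, not any real swarm system) of what eventual consistency looks like for splintered knowledge: each part of the swarm acts on its local state, and a later merge step reconciles the divergence.

```python
# Toy sketch: two splintered replicas of a knowledge store converge
# via last-write-wins (LWW) merging. Keys/values are illustrative.

def merge(a: dict, b: dict) -> dict:
    """Merge two replicas; for each key, keep the entry with the
    newer timestamp. Running this after a partition heals is the
    'repair' step that lets diverged replicas converge."""
    merged = dict(a)
    for key, (ts, value) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Two parts of the swarm observe different events while partitioned.
# Each entry is (timestamp, value).
replica_1 = {"threat_at_nest": (10, True)}
replica_2 = {"threat_at_nest": (12, False), "food_at_east": (11, True)}

# After reconnecting, both sides apply the same merge and agree.
assert merge(replica_1, replica_2) == merge(replica_2, replica_1)
print(merge(replica_1, replica_2))
```

The key property is that the merge is order-insensitive (assuming distinct timestamps), so it doesn't matter which parts of the swarm reconnect first.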
So, I wonder: would imposing a requirement of eventual consistency on a hivemind's distributed knowledge still impose some form of Amdahl's-law bottleneck? Well, the CAP and PACELC theorems suggest to me that if you really want consistency, you end up reducing the availability of knowledge and increasing the latency with which knowledge is available. But you don't have to do this. You can keep knowledge highly available at low latency, but you introduce a need to “fix” things later, sometimes. So, a hivemind would likely be able to “splinter” and still act rapidly, but if it has eventual consistency, there must eventually be some repair process that makes each part of the swarm agree on some observed sequence of events.
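For a sense of scale, Amdahl's law says that if a fraction s of the work is inherently serial, the speedup from N parallel workers is capped at 1 / (s + (1 − s)/N). A toy calculation (the 5% figure is made up, purely for illustration):

```python
def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Maximum speedup when serial_fraction of the work cannot be
    parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# If even 5% of a hivemind's "thinking" were a serial repair/agreement
# step, a thousand-member swarm would only buy a ~20x speedup:
print(amdahl_speedup(0.05, 1000))  # ~19.6
# ...and the ceiling as workers -> infinity is 1 / 0.05 = 20x.
```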
From wikipedia: “The default versions of DynamoDB, Cassandra, Riak and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency.” I wonder to what degree, in these systems running in real environments, the processing of “tombstones” https://en.wikipedia.org/wiki/Tombstone_(data_store) imposes an Amdahl's-law-style serialization. Tombstones are basically markers left behind by deletes; they have to be retained and eventually processed to actually guarantee eventual consistency, and they introduce additional computational cost. The cleanup process that deals with tombstones is called “compaction”.
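To make that concrete too, here's a toy sketch of tombstones and compaction (Python again, with a made-up in-memory structure; real engines like Cassandra do this over immutable on-disk SSTables): a delete is written as a marker, and a later compaction pass discards superseded records and expired markers.

```python
from itertools import count

TOMBSTONE = object()   # sentinel marking "this key was deleted"
clock = count()        # logical clock standing in for wall time

log = []  # append-only list of (timestamp, key, value) records

def put(key, value):
    log.append((next(clock), key, value))

def delete(key):
    put(key, TOMBSTONE)  # a delete is just a write of the marker

def compact(log, now, grace_period):
    """Keep only the newest record per key; drop tombstones older than
    grace_period (i.e., old enough that every replica should have seen
    them). Note this is a pass over the whole log -- the serial-looking
    cleanup work I'm wondering about above."""
    newest = {}
    for ts, key, value in log:
        if key not in newest or ts > newest[key][0]:
            newest[key] = (ts, value)
    return [(ts, k, v) for k, (ts, v) in newest.items()
            if not (v is TOMBSTONE and now - ts > grace_period)]

put("x", 1)
put("x", 2)      # supersedes the first write
delete("x")      # tombstone at timestamp 2
log = compact(log, now=10, grace_period=5)
print(log)       # [] -- stale writes and the expired tombstone are gone
```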
And all of this explains why, before I've even had my second coffee, I'm now considering reading this paper:
Fast Compaction Algorithms for NoSQL Databases
Let's hope this goes somewhere!
