The tl;dr
The design of machine intelligence ought to emulate the evolution and development of the most powerful reasoning engine known to us: The human brain.
Intelligence is embodied. Living in a world implies having a specialized model of the ecological niche that one inhabits.
Intelligence is also emergent. Sophisticated intelligent systems initially evolved to solve simple, domain-specific problems, and then learned to combine these solutions to handle increasingly complex problems.
In the human brain, this is demonstrated by neural reuse and the on-the-fly assembly and reassembly of neural processing units.
This explains why human reasoning, especially about novel situations, often involves reasoning by analogy and metaphor.
Accordingly, emergent reuse, reassembly, and analogical reasoning must be key features in the design of machine intelligence — and open a path towards the development of collaborative, superintelligent AI systems.
Intelligence is embodied
Intelligence is an embodied, emergent phenomenon — and at Noumenal Labs, we believe that machine intelligence should be designed and built accordingly. That is, we believe that the best design strategy for research and development of bona fide machine intelligence (and superintelligence) is to draw inspiration from the evolution and development of the human brain and from the analysis of human cognitive processes.
We begin from the embodied intelligence thesis, or natural intelligence thesis, which states that naturally occurring intelligent systems, like us, must come to form specialized models of the world that they inhabit — that is, world models. This is the main result of the information-theoretic analysis of cognitive systems like brains, elucidated via Bayesian mechanics: if something persists in an environment, then it will look to an observer “as if” it represents or statistically models those parts of the environment with which it interacts. Applied to living organisms, Bayesian mechanics tells us that any organism that achieves homeostasis with its ecological niche must behave as if it had a model of that niche. Otherwise, it would not exist.
For example, simple unicellular organisms must behave as if they had a sufficiently robust model of the various chemical species in the molecular soup they inhabit, and of the regularities that obtain there, in order to persist over time. This is evidenced by their ability to adapt to a constantly changing and fluctuating environment. The transition to multicellular life extended the domain of concern from the chemical to the macroscopic world. This implied a transition from modeling the chemistry of the simple environment of unicellular life to modeling the object-centered, macroscopic physics of the world in which we live. Our understanding of this class of statistical regularities was later formalized by scientific investigation into Newton’s laws of motion, Maxwell’s equations, etc., and exploited via systems engineering. But the core concepts and capabilities that led to the technological revolution, and which were ultimately enabled by scientific precision, were already present in our understanding of the macroscopic world in which we evolved.
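The claim that persistence implies modeling can be made concrete with a toy simulation. This is our own illustration, not a result from the Bayesian mechanics literature: the “model” is a single scalar belief updated by exponential smoothing, and the function name, niche dynamics, and failure threshold are all hypothetical.

```python
import math

def simulate(learning_rate: float, steps: int = 200) -> bool:
    """Return True if the 'organism' persists, i.e. its internal belief
    stays close to the state of its niche. A zero learning rate means
    the organism carries no model of its environment."""
    belief = 0.0
    for t in range(steps):
        temperature = 10.0 * math.sin(t / 20.0)           # slowly varying niche
        belief += learning_rate * (temperature - belief)  # model update
        if abs(belief - temperature) > 5.0:               # homeostatic failure
            return False
    return True
```

An agent that updates its belief (`learning_rate = 0.5`) tracks the niche and persists, while one that never updates (`learning_rate = 0.0`) fails as soon as the niche drifts away from its fixed belief.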
The main takeaway is that the atomic elements of embodied intelligence are models of the objects that compose the inhabited environment, and of the relationships between those objects. These atomic elements are evolutionarily ancient models that arose as a necessary consequence of our ancestors’ and other creatures’ interactions with the physical systems of their environment. In turn, the enhanced capacities of sophisticated systems like the human brain arose from the manner in which these basic constituents were composed, redeployed in new situations, and ultimately formalized via scientific inquiry.
Intelligence is emergent
In nature, sophisticated forms of intelligence almost always emerge from the coordinated, collaborative activity of many specialized domain expert systems. Consider the human body: its intelligence arises from the coordination of several specialized systems. We tend not to think of organs as intelligent, but they are domain experts. The heart is uniquely good at pumping blood; the immune system is uniquely good at generating immune responses. The same is true of every other major organ system in the human body.
Perhaps the most striking example of emergent intelligence is the human brain. Notably, the abilities of the human brain come from its modular architecture. The brain is, emphatically, not structured as a single monolithic feedforward neural network. Instead, it is a collection of highly specialized, intricately networked modules or regions. Individual regions are computationally specialized modules: little domain experts that are very good at solving a narrow range of specialist problems, or performing a narrow range of specific computations, that arise within a well-defined domain. The computational power of our cortical architecture comes from its ability to dynamically link these computational resources in a context-specific way.
Specifically, specialized brain regions learn to coordinate with each other to solve complex problems — and also learn from each other over developmental time. The principal mechanism that enables this is neural reuse, assembly, and reassembly, implemented via sparse and dynamic connectivity (Anderson, 2015). Over the course of its evolution, the brain repurposed and recombined simpler, evolutionarily more ancient models that it had previously acquired to tackle new problems in similar, but different enough, domains. The key insight is that brain regions are specialized computationally, not functionally — and they are assembled on the fly into functional units according to the specific demands of the situation. In other words, brain regions are not specialized for specific tasks, such as visual or auditory processing. Careful multivariate analysis instead reveals that brain regions are specialized to transform specific kinds of input patterns into specific kinds of output patterns, regardless of sensory modality. Thus, calling a brain region a “visual region” or an “auditory region” is a bit misleading. (See Anderson, 2015, for an in-depth discussion.) It is more accurate to view these transiently assembled local neural subsystems as canonical computational units of information processing in the brain, recruited on the basis of their ability to enhance the execution of a given function. Thus, not only higher-order cognitive processes, but really all cognitive processes, are the result of rapid, dynamic reuse and novel composition of previously learned models.
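The idea of computationally specialized units assembled on the fly into functional wholes can be sketched in a few lines of Python. This is a toy illustration of the principle, not a model of cortex; the region names and transforms are deliberately trivial stand-ins of our own invention. Each “region” is a fixed input-to-output transform, and different tasks recruit different assemblies of the same regions:

```python
from typing import Callable, Dict, List

Signal = List[float]

# Each "region" is specialized computationally: it maps one kind of input
# pattern to one kind of output pattern, regardless of where the input
# came from.
REGIONS: Dict[str, Callable[[Signal], Signal]] = {
    "smooth":   lambda s: [(a + b) / 2 for a, b in zip(s, s[1:])] or s,
    "contrast": lambda s: [x - sum(s) / len(s) for x in s],
    "rectify":  lambda s: [max(0.0, x) for x in s],
}

def assemble(task_demands: List[str]) -> Callable[[Signal], Signal]:
    """Transiently assemble regions into a functional unit for one task."""
    stages = [REGIONS[name] for name in task_demands]
    def unit(signal: Signal) -> Signal:
        for stage in stages:
            signal = stage(signal)
        return signal
    return unit

# The same regions are recruited into different assemblies for different tasks.
edge_detector = assemble(["smooth", "contrast", "rectify"])
denoiser      = assemble(["smooth", "smooth"])
```

Note that `smooth` is reused by both assemblies: the region is specialized for a computation, not for a task.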
Reasoning by analogy, metaphor, and example
At a cognitive level, this sheds light on why humans often reason by analogy, use metaphors, and appeal to examples when confronted with new situations. Our mental models are often — and intuitively — redeployed in new situations, enabling us to understand what we do not know in terms of what we do. Indeed, in humans, neural reuse and reassembly manifest in our ability to reason by analogy and metaphor, which consist precisely in repurposing pre-fit models to address the demands of novel and unpredicted situations. We also use examples in the same manner, to make sense of a novel case by analogy with how we understand previously encountered cases. For instance, we say that the effect of spacetime curvature due to massive bodies is like a heavy ball stretching a rubber mat. Or we borrow from our natural human upright posture and gait the metaphors of ascent to heaven and descent into hell (Lakoff and Johnson, 2008). In all these cases, understanding phenomena in a new domain, or one less well understood, is made possible by deploying our understanding of a domain that we know intuitively. One could even argue that, in humans, essentially all reasoning about novel phenomena is in some sense necessarily reasoning by analogy to the physical systems we evolved to understand.
Crucially, reasoning by analogy is premised upon the existence of a rich set of models from which to form the analogies. These models must be object-centered and relational, because analogies are by definition relational. Of course, this raises the question: where do the models that enable analogical reasoning ultimately come from? If we take the perspective from Bayesian mechanics sketched out above, the atomic elements of thought must come from the world in which we learned to survive over evolutionary and developmental time — models that are then reused and reassembled to suit our purposes in a changing world. Such macroscopic, object-centered, relational world models are precisely what is required to enable reasoning by analogy and metaphor use — grounded in macroscopic physics.
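The claim that analogy is the redeployment of relational models can be illustrated with a minimal structure-mapping sketch, loosely in the spirit of Gentner-style structure mapping. This is our own toy example, and the function and relation names are hypothetical. Domains are sets of relational triples, and an analogy is an object correspondence that preserves as many relations as possible:

```python
from itertools import permutations
from typing import Dict, List, Tuple

Relation = Tuple[str, str, str]  # (predicate, subject, object)

def best_mapping(base: List[Relation], target: List[Relation]) -> Dict[str, str]:
    """Brute-force structure mapping: pair objects of the base domain with
    objects of the target domain so that as many relational facts as
    possible carry over."""
    base_objs = sorted({o for _, s, t in base for o in (s, t)})
    targ_objs = sorted({o for _, s, t in target for o in (s, t)})
    target_set = set(target)
    best, best_score = {}, -1
    for perm in permutations(targ_objs, len(base_objs)):
        mapping = dict(zip(base_objs, perm))
        score = sum((p, mapping[s], mapping[t]) in target_set for p, s, t in base)
        if score > best_score:
            best, best_score = mapping, score
    return best

# The classic solar-system/atom analogy as relational triples.
solar = [("orbits", "planet", "sun"), ("attracts", "sun", "planet"),
         ("more_massive", "sun", "planet")]
atom  = [("orbits", "electron", "nucleus"), ("attracts", "nucleus", "electron")]

best_mapping(solar, atom)  # → {'planet': 'electron', 'sun': 'nucleus'}
```

The mapping is chosen purely on relational structure: the analogy holds even though planets and electrons share no surface features.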
Implications for the design of machine intelligence
This leads us to a key point that should inform the design of machine intelligence. One crucial implication of the above is that the north star of research and development in the field of AI — artificial general intelligence (AGI) — is not achievable. This is not merely because it would be technically difficult to build such a system. It is because the very idea of general intelligence is hype or mythology. The idea of a monolithic, superintelligent system that can perform at human level or beyond on a general set of tasks is fundamentally flawed, because there is no such thing as a monolithic general intelligence.
All intelligence is embodied and all intelligence that we are aware of is emergent. This suggests a path to the design of machines with what one might call “general” intelligence. In our view, such a machine would be composed of a large number of coordinated domain experts that can respond collaboratively to novel situations. Machine intelligence should be designed explicitly to enable and facilitate this kind of network architecture.
If not AGI, what then might be the north star of research and development in AI? At Noumenal Labs, we believe that emergent collaborative design is the critical design feature that will unlock superintelligence at scale. That is, intelligent machines and the networks that they form should be explicitly designed to enable the novel reuse of available models to suit the demands of new tasks and situations. And they should be designed to enable the reassembly of ready-to-hand models into hybrid super-expert models, which combine the understanding and actionable insights of several smaller domain specialist models — on the fly, in response to more complex tasks and new data. In other words, the design of machine intelligence must emulate the phylogeny and ontogeny of the human brain: It must also emerge from the bottom up, via the collaboration of specialized expert agents and models.
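The proposal of a hybrid super-expert assembled on demand from smaller domain specialists can be sketched as follows. This is a minimal, hypothetical illustration: the expert names, the domain tags, and the pooling rule (a simple average) are our own stand-ins for whatever coordination mechanism a real system would use.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Expert:
    """A domain specialist: a narrow world model plus a prediction rule."""
    name: str
    domains: frozenset
    predict: Callable[[Dict[str, float]], float]

experts = [
    Expert("kinematics", frozenset({"motion"}),
           lambda obs: obs["velocity"] * obs["time"]),        # ideal distance
    Expert("drag", frozenset({"motion", "fluids"}),
           lambda obs: 0.9 * obs["velocity"] * obs["time"]),  # crude friction correction
]

def assemble_hybrid(experts: List[Expert],
                    task_domains: set) -> Callable[[Dict[str, float]], float]:
    """Compose the relevant specialists, on the fly, into a hybrid super-expert."""
    team = [e for e in experts if e.domains & task_domains]
    if not team:
        raise LookupError("no specialist covers this task")
    def predict(obs: Dict[str, float]) -> float:
        # Pool the specialists' answers; a real system would weight or negotiate.
        return sum(e.predict(obs) for e in team) / len(team)
    return predict

super_expert = assemble_hybrid(experts, {"motion"})
```

The hybrid exists only for the duration of the task; the same specialists remain available for recruitment into other assemblies, mirroring the transient neural coalitions described above.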