Artificial General Intelligence

— A gentle introduction

Pei Wang

[Last Update: June 23, 2013]


1. From AI to AGI

1.1. AI: the up, the down, and the turn

Artificial Intelligence (AI) started with the "thinking machine" of human-comparable intelligence as its ultimate goal, as documented in the early literature of the field. In the past, there were ambitious projects aiming directly at this goal, though they all failed. Partly due to the realized difficulty of the problem, in the 1970s-1980s mainstream AI gradually moved away from general-purpose intelligent systems and turned to domain-specific problems and special-purpose solutions, a change that has drawn opposite attitudes. Consequently, the field currently called "AI" consists of many loosely related subfields without a common foundation or framework, and suffers from an identity crisis.

1.2. A renaissance started

Roughly in the period of 2004 to 2007, calls for research on general-purpose systems returned, both inside and outside mainstream AI.

Anniversaries are a good time to review the big picture of a field. In a number of collections and events in this period, many well-established AI researchers raised the topic of general-purpose and human-level intelligence.

More or less coincidentally, from outside mainstream AI, several books appeared with bold titles and novel technical plans to produce intelligence as a whole in computers, alongside several less technical but more influential books sharing the same optimism about the possibility of building general-purpose AI. So after several decades, "general-purpose system", "integrated AI", and "human-level AI" have become less forbidden (though far from popular) topics, as reflected by a number of recent meetings.

1.3. Recent developments

Since then, new research communities have emerged and new books have appeared.


2. AGI Overview

2.1. What is Artificial General Intelligence (AGI)

AGI research treats "intelligence" as a whole. Therefore, "AGI" is closer to the original meaning of "AI" than to its current meaning. Concepts similar to AGI include "strong AI", "human-level AI", "complete AI", "thinking machine", and many others.

AGI differs from mainstream AI in several fundamental points.

AGI research has a science (theory) aspect and an engineering (technique) aspect. A complete AGI work normally includes
  1. a theory of intelligence,
  2. a formal model of the theory,
  3. a computational implementation of the model.
The book chapter "Aspects of Artificial General Intelligence" clarifies the notion of AGI, and responds to the common doubts about and objections to this research.

2.2. Fundamental AI/AGI questions

The most general theoretical questions every AI (AGI) researcher needs to answer include:
  1. What is AI, accurately specified?
  2. Is it possible to build the AI as specified?
  3. If AI is possible, what is the most plausible way to achieve it?
  4. Even if we know how to achieve AI, should we really do it?
My own answers to these questions are summarized here.

Most AI (AGI) researchers answer "Yes" to the 2nd and 4th questions, though some people outside the field say "No" to one or the other. In the following we compare the different answers to the 1st and 3rd questions, which concern the research goal and the technical strategy of AI (AGI), respectively.

2.3. Research objectives

In the field of AI/AGI, there are different research objectives, corresponding to different understandings (working definitions) of intelligence, in terms of where the similarity between the brain and the computer should be expected. The reason for this diversity is that human intelligence is first described at a certain level of abstraction (the science aspect of AI), and the description is then used as the objective to be achieved (the engineering aspect of AI).

The above objectives are related, but still very different, and do not subsume each other. The preferred way to achieve one is not preferred for the others.

2.4. Overall strategies

On one hand, the ultimate goal of AI is to reproduce intelligence as a whole; on the other hand, engineering practice must proceed step by step. Three overall strategies have been followed, and the selection among them partially depends on the selection of the objective.

2.5. Major techniques

The major techniques in AGI projects include, but are not limited to, those surveyed below. Though each of these techniques is also explored in mainstream AI, using it in a general-purpose system leads to very different design decisions in its technical details.


3. Representative AGI Projects

The following projects are selected to represent existing AGI research, because each of them (1) is clearly oriented to AGI, (2) is still very active, and (3) has ample publications of technical details.

Each project name is linked to the project website, from which the following quotations are extracted. The focus of the quotations is on the research goal (the 1st question) and the technical path (the 3rd question). For each project, two publications are selected: a brief introduction and a detailed description.

Soar [A Gentle Introduction to Soar; The Soar Cognitive Architecture]

The ultimate in intelligence would be complete rationality which would imply the ability to use all available knowledge for every task that the system encounters. Unfortunately, the complexity of retrieving relevant knowledge puts this goal out of reach as the body of knowledge increases, the tasks are made more diverse, and the requirements in system response time more stringent. The best that can be obtained currently is an approximation of complete rationality. The design of Soar can be seen as an investigation of one such approximation.

For many years, a secondary principle has been that the number of distinct architectural mechanisms should be minimized. Through Soar 8, there has been a single framework for all tasks and subtasks (problem spaces), a single representation of permanent knowledge (productions), a single representation of temporary knowledge (objects with attributes and values), a single mechanism for generating goals (automatic subgoaling), and a single learning mechanism (chunking). We have revisited this assumption as we attempt to ensure that all available knowledge can be captured at runtime without disrupting task performance. This is leading to multiple learning mechanisms (chunking, reinforcement learning, episodic learning, and semantic learning), and multiple representations of long-term knowledge (productions for procedural knowledge, semantic memory, and episodic memory).

Two additional principles that guide the design of Soar are functionality and performance. Functionality involves ensuring that Soar has all of the primitive capabilities necessary to realize the complete suite of cognitive capabilities used by humans, including, but not limited to reactive decision making, situational awareness, deliberate reasoning and comprehension, planning, and all forms of learning. Performance involves ensuring that there are computationally efficient algorithms for performing the primitive operations in Soar, from retrieving knowledge from long-term memories, to making decisions, to acquiring and storing new knowledge.
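The match-select-apply decision cycle implicit in the quotation can be sketched as a minimal production system. This is an illustrative caricature, not Soar's actual API or rule syntax; all names here are invented.

```python
# Minimal production-system decision cycle, loosely in the spirit of Soar's
# match -> select -> apply loop. All names are illustrative, not Soar's API.

def run_cycle(working_memory, productions, select):
    """Fire one decision cycle: match all rules, select one, apply it."""
    # Match: collect every production whose condition holds in working memory
    matched = [p for p in productions if p["condition"](working_memory)]
    if not matched:
        return working_memory, None          # impasse: nothing applies
    # Select: conflict resolution picks a single production to fire
    chosen = select(matched)
    # Apply: the chosen production's action updates working memory
    working_memory = chosen["action"](working_memory)
    return working_memory, chosen["name"]

# Toy task: move a counter toward a goal value.
productions = [
    {"name": "increment",
     "condition": lambda wm: wm["x"] < wm["goal"],
     "action": lambda wm: {**wm, "x": wm["x"] + 1}},
    {"name": "halt",
     "condition": lambda wm: wm["x"] == wm["goal"],
     "action": lambda wm: {**wm, "done": True}},
]

wm = {"x": 0, "goal": 2, "done": False}
trace = []
while not wm["done"]:
    wm, fired = run_cycle(wm, productions, select=lambda ms: ms[0])
    trace.append(fired)

print(trace)  # ['increment', 'increment', 'halt']
```

Soar's real architecture adds problem spaces, automatic subgoaling when no production applies, and learning mechanisms that this loop omits entirely.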

ACT-R [An Integrated Theory of the Mind; The Atomic Components of Thought]
ACT-R is a cognitive architecture: a theory for simulating and understanding human cognition. Researchers working on ACT-R strive to understand how people organize knowledge and produce intelligent behavior. As the research continues, ACT-R evolves ever closer into a system which can perform the full range of human cognitive tasks: capturing in great detail the way we perceive, think about, and act on the world.

On the exterior, ACT-R looks like a programming language; however, its constructs reflect assumptions about human cognition. These assumptions are based on numerous facts derived from psychology experiments. Like a programming language, ACT-R is a framework: for different tasks (e.g., Tower of Hanoi, memory for text or for list of words, language comprehension, communication, aircraft controlling), researchers create models (aka programs) that are written in ACT-R and that, beside incorporating the ACT-R's view of cognition, add their own assumptions about the particular task. These assumptions can be tested by comparing the results of the model with the results of people doing the same tasks.

ACT-R is a hybrid cognitive architecture. Its symbolic structure is a production system; the subsymbolic structure is represented by a set of massively parallel processes that can be summarized by a number of mathematical equations. The subsymbolic equations control many of the symbolic processes. For instance, if several productions match the state of the buffers, a subsymbolic utility equation estimates the relative cost and benefit associated with each production and decides to select for execution the production with the highest utility. Similarly, whether (or how fast) a fact can be retrieved from declarative memory depends on subsymbolic retrieval equations, which take into account the context and the history of usage of that fact. Subsymbolic mechanisms are also responsible for most learning processes in ACT-R.
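The utility-based conflict resolution described above can be made concrete with a toy calculation. The numbers and the noiseless arg-max here are simplifications: real ACT-R adds logistic noise and learns utilities from experience.

```python
# Toy illustration of utility-based conflict resolution, in the spirit of
# ACT-R's subsymbolic layer. Real ACT-R perturbs utilities with noise and
# learns them from rewards; this sketch just picks the highest estimate.

def select_production(matching, utility):
    """Among productions that match the buffers, fire the highest-utility one."""
    return max(matching, key=utility)

# Hypothetical utility estimates: estimated benefit minus estimated cost.
estimates = {
    "retrieve-from-memory": 12.0 - 3.0,   # benefit 12, cost 3.0 -> utility 9.0
    "recompute-answer":     12.0 - 7.5,   # benefit 12, cost 7.5 -> utility 4.5
}

chosen = select_production(estimates.keys(), utility=lambda p: estimates[p])
print(chosen)  # retrieve-from-memory
```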

LIDA [The LIDA Architecture; LIDA Tutorial]
Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive "theory of everything." With modules or processes for perception, working memory, episodic memories, "consciousness," procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA technology is based on the LIDA cognitive cycle, a sort of "cognitive atom." The more elementary cognitive modules play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles.

The LIDA architecture represents perceptual entities, objects, categories, relations, etc., using nodes and links .... These serve as perceptual symbols acting as the common currency for information throughout the various modules of the LIDA architecture.
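The cognitive cycle described in the quotation can be caricatured as one pass through a handful of module functions. The module names follow the text, but their interfaces below are invented purely for illustration.

```python
# Caricature of one LIDA-style cognitive cycle: perceive, let workspace
# contents compete for "consciousness" (a global broadcast), then select
# an action. The interfaces are invented for illustration only.

def cognitive_cycle(stimulus, workspace, procedural_memory):
    # Perception: turn raw input into a percept placed in the workspace
    workspace.append(("percept", stimulus))
    # "Consciousness": the most salient workspace item wins the competition
    # (salience here is a stand-in heuristic) and is broadcast globally
    broadcast = max(workspace, key=lambda item: len(str(item[1])))
    # Action selection: procedural memory maps the broadcast to a behavior
    action = procedural_memory.get(broadcast[0], "no-op")
    return broadcast, action

workspace = []
procedural = {"percept": "attend"}
broadcast, action = cognitive_cycle("red light ahead", workspace, procedural)
print(action)  # attend
```

In the real architecture each step involves dedicated memories and learning, and higher-level cognition spans many such cycles.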

SNePS [The GLAIR Cognitive Architecture; SNePS Tutorial]
The long term goal of the SNePS Research Group is to understand the nature of intelligent cognitive processes by developing and experimenting with computational cognitive agents that are able to use and understand natural language, reason, act, and solve problems in a wide variety of domains.

The SNePS knowledge representation, reasoning, and acting system has several features that facilitate metacognition in SNePS-based agents. The most prominent is the fact that propositions are represented in SNePS as terms rather than as logical sentences. The effect is that propositions can occur as arguments of propositions, acts, and policies without limit, and without leaving first-order logic.
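The propositions-as-terms point can be made concrete: when a proposition is itself a term, it can appear as an argument of another proposition without higher-order machinery. The representation below is an illustrative sketch, not SNePS syntax.

```python
# Sketch of propositions as terms: because a proposition is just a term,
# it can be nested inside other propositions without limit. This mirrors
# the SNePS idea but is not SNePS notation.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:
    functor: str
    args: Tuple = ()

    def __str__(self):
        if not self.args:
            return self.functor
        return f"{self.functor}({', '.join(map(str, self.args))})"

# A ground proposition...
rains = Term("Rains", (Term("today"),))
# ...used as an argument of another proposition (metacognition-friendly),
# which is in turn an argument of yet another:
belief = Term("Believes", (Term("Cassie"), rains))
doubt  = Term("Doubts",   (Term("Stu"), belief))

print(doubt)  # Doubts(Stu, Believes(Cassie, Rains(today)))
```

Because every proposition stays a first-order term, rules can quantify over beliefs about beliefs without leaving first-order logic, which is the feature the quotation highlights.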

Cyc [Cyc: A Large-Scale Investment in Knowledge Infrastructure; Building Large Knowledge-Based Systems]
Vast amounts of commonsense knowledge, representing human consensus reality, would need to be encoded to produce a general AI system. In order to mimic human reasoning, Cyc would require background knowledge regarding science, society and culture, climate and weather, money and financial systems, health care, history, politics, and many other domains of human experience. The Cyc Project team expected to encode at least a million facts spanning these and many other topic areas.

The Cyc knowledge base (KB) is a formalized representation of a vast quantity of fundamental human knowledge: facts, rules of thumb, and heuristics for reasoning about the objects and events of everyday life. The medium of representation is the formal language CycL. The KB consists of terms -- which constitute the vocabulary of CycL -- and assertions which relate those terms. These assertions include both simple ground assertions and rules.
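The distinction between ground assertions and rules can be illustrated with a toy knowledge base and forward-chainer. This is in no way CycL; it only shows the two kinds of assertions the quotation mentions working together.

```python
# Minimal knowledge base mixing ground assertions (facts) with one rule,
# illustrating the two kinds of assertions described for Cyc. Not CycL.

facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}

def apply_rules(facts):
    """Forward-chain one toy rule: if X isa Y and Y isa Z, then X isa Z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(derived):
            for (_, y2, z) in list(derived):
                if y == y2 and ("isa", x, z) not in derived:
                    derived.add(("isa", x, z))   # new derived assertion
                    changed = True
    return derived

kb = apply_rules(facts)
print(("isa", "Fido", "Mammal") in kb)  # True
```

Cyc's actual KB adds contexts (microtheories), a far richer language, and millions of assertions, but the fact-plus-rule structure is the same in kind.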

AIXI [Universal Algorithmic Intelligence: A mathematical top->down approach; Universal Artificial Intelligence]
An important observation is that most, if not all known facets of intelligence can be formulated as goal driven or, more precisely, as maximizing some utility function.

Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameter-free theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible.
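The combination can be stated compactly. In the notation of the cited book (sketched here from memory, so the book remains the authoritative source), the AIXI agent chooses action a_k by expectimax over all computable environments, each weighted by its description length:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl(r_k + \cdots + r_m\bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here a_i, o_i, r_i are actions, observations, and rewards, q ranges over programs for a universal Turing machine U, l(q) is the program length, and m is the horizon. The weight 2^{-l(q)} is the universal prior, so shorter (simpler) environment programs dominate the expected-reward calculation.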

The major drawback of the AIXI model is that it is uncomputable, ... which makes an implementation impossible. To overcome this problem, we constructed a modified model AIXItl, which is still effectively more intelligent than any other time t and length l bounded algorithm.

NARS [From NARS to a Thinking Machine; Rigid Flexibility: The Logic of Intelligence]
What makes NARS different from conventional reasoning systems is its ability to learn from its experience and to work with insufficient knowledge and resources. NARS attempts to uniformly explain and reproduce many cognitive facilities, including reasoning, learning, planning, etc, so as to provide a unified theory, model, and system for AI as a whole. The ultimate goal of this research is to build a thinking machine.

The development of NARS takes an incremental approach consisting of four major stages. At each stage, the logic is extended to give the system a more expressive language, a richer semantics, and a larger set of inference rules; the memory and control mechanism are then adjusted accordingly to support the new logic.

In NARS the notion of "reasoning" is extended to represent a system's ability to predict the future according to the past, and to satisfy unlimited resource demands using a limited resource supply, by flexibly combining justifiable micro steps into macro behaviors in a domain-independent manner.
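To work with insufficient knowledge, NARS attaches an experience-grounded truth value, a (frequency, confidence) pair, to every belief, and each inference rule carries a truth function. The deduction function below follows the NAL literature as I recall it; treat it as a sketch and consult the cited book for the definitive formulas.

```python
# Sketch of NARS-style truth values: each belief carries a (frequency,
# confidence) pair in [0, 1], and each inference rule computes the truth
# value of its conclusion. The deduction function follows the published
# NAL material as recalled here; see the cited book for the exact form.

def deduction(f1, c1, f2, c2):
    """From A->B <f1, c1> and B->C <f2, c2>, derive A->C <f, c>."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2   # conclusion is less confident than its premises
    return f, c

# "Robins are birds" <1.0, 0.9> and "Birds fly" <0.9, 0.9>
f, c = deduction(1.0, 0.9, 0.9, 0.9)
print(f, c)  # 0.9 0.729
```

Note that confidence strictly decreases along an inference chain, which is how the system keeps track of how far a conclusion stands from direct experience.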

OpenCog [An Overview of the OpenCogBot Architecture; Building Better Minds]
OpenCog, as a software framework, aims to provide research scientists and software developers with a common platform to build and share artificial intelligence programs. The long-term goal of OpenCog is acceleration of the development of beneficial AGI.

OpenCogPrime is a specific AGI design being constructed within the OpenCog framework. It comes with a fairly detailed, comprehensive design covering all aspects of intelligence. The hypothesis is that if this design is fully implemented and tested on a reasonably-sized distributed network, the result will be an AGI system with general intelligence at the human level and ultimately beyond.

While an OpenCogPrime based AGI system could do a lot of things, we are initially focusing on using OpenCogPrime to control simple virtual agents in virtual worlds. We are also experimenting with using it to control a Nao humanoid robot. Some illustrative videos are available on the project website.

HTM [Hierarchical Temporal Memory; On Intelligence]
At the core of every Grok model is the Cortical Learning Algorithm (CLA), a detailed and realistic model of a layer of cells in the neocortex. Contrary to popular belief, the neocortex is not a computing system, it is a memory system. When you are born, the neocortex has structure but virtually no knowledge. You learn about the world by building models of the world from streams of sensory input. From these models, we make predictions, detect anomalies, and take actions.

In other words, the brain can best be described as a predictive modeling system that turns predictions into actions. Three key operating principles of the neocortex are described below: sparse distributed representations, sequence memory, and on-line learning.
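Sparse distributed representations, the first principle named above, can be compared by simple overlap: two SDRs are similar to the degree that their active bits coincide. The sketch below uses tiny sets for readability; HTM systems use thousands of bits with only a few percent active.

```python
# Sparse distributed representations (SDRs) modeled as sets of active bit
# indices. Similarity is the overlap: how many active bits two SDRs share.
# Sizes are tiny for readability; real HTM SDRs have thousands of bits
# with roughly 2% active.

def overlap(sdr_a, sdr_b):
    """Number of active bits the two representations share."""
    return len(sdr_a & sdr_b)

cat    = {3, 17, 41, 96, 150}
feline = {3, 17, 41, 96, 201}   # shares most active bits with "cat"
car    = {8, 52, 77, 130, 201}  # nearly disjoint from "cat"

print(overlap(cat, feline))  # 4
print(overlap(cat, car))     # 0
```

Because meaning is spread across many bits, a high overlap indicates semantic similarity while random pairs of sparse SDRs almost never collide, which is what makes the representation robust to noise.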

A rough classification

The above AGI projects are roughly classified in the following table, according to the type of their answers to the previously listed 1st question (on research goal) and 3rd question (on technical path).

[Classification table: research goal (rows) against technical path (columns); the recoverable cells group LIDA with OpenCog in one entry, and SNePS with Soar in another.]

Since this classification is made at a high level, projects in the same entry of the table are still quite different in the details of their research goals and technical paths.

In summary, the current AGI projects are based on very different theories and techniques.