Artificial General Intelligence

Rationality and Validity

1. Different working environments

For intelligence to be specified as the ability of "doing the right thing", some assumptions about the system's living/working environment must be made first. Here the assumption is that of "insufficient knowledge and resources" (AIKR). For a reasoning system, it means the system is finite (its information-processing capacity is bounded), works in real time (new tasks can arrive at any moment), and is open (no restriction is imposed on the content of incoming tasks and knowledge). Obviously, classical logic cannot be used in such an environment, nor can the other non-classical logics, since their preconditions are not satisfied. On the other hand, this is the normal environment for intelligent systems.

2. Induction as an example

Example: How to predict the next number if the observed sequence is 1, 2, 3, 4? Or, how to predict the color of the next raven if 100 ravens have been observed, and all of them are black?

Francis Bacon: You can find a regularity by induction.

David Hume: You cannot, unless the future is just like the past, but how can you know that? Induction is just a mental habit.

Karl Popper: There is no such thing as "induction" in science. You can only falsify a general statement, but never verify it.

Rudolf Carnap: At least we can evaluate the probability of a hypothesis, under certain assumptions.

Thomas Bayes: Degrees of belief should be treated as probabilities (as argued by the Dutch Book arguments), and belief revision as conditional-probability calculation. Induction is included as a special case.
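The Bayesian treatment of the raven example can be sketched as conditioning on the observations. The two hypotheses and the uniform prior below are illustrative assumptions, not part of the original argument:

```python
def posterior(prior, likelihood, n_observations):
    """Bayesian belief revision by conditioning: multiply each prior by the
    likelihood of the data under that hypothesis, then renormalize.

    prior:      dict mapping hypothesis name -> prior probability
    likelihood: dict mapping hypothesis name -> per-observation probability
                of seeing a black raven under that hypothesis
    """
    unnormalized = {h: prior[h] * likelihood[h] ** n_observations for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Hypothetical setup: "all ravens are black" vs. "half of ravens are black",
# with equal priors, after 100 observed black ravens.
p = posterior({"all_black": 0.5, "half_black": 0.5},
              {"all_black": 1.0, "half_black": 0.5},
              100)
# p["all_black"] is now overwhelmingly close to 1, yet never exactly 1 —
# the "induction" here is a special case of probabilistic conditioning.
```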

Ray Solomonoff: Assuming the environment follows some unknown but computable probability distribution, the optimal prediction can be determined by giving simpler hypotheses (that fit the observation) higher probability.
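Solomonoff's proposal can be sketched on the 1, 2, 3, 4 sequence from above. The hypothesis set and the description lengths below are illustrative assumptions; a real Solomonoff mixture ranges over all computable hypotheses, which is incomputable:

```python
observed = [1, 2, 3, 4]

# Illustrative hypotheses: (name, description length in bits, i-th term).
hypotheses = [
    ("n + 1",      3, lambda i: i + 1),        # 1, 2, 3, 4, 5, ...
    ("cycle 1..4", 6, lambda i: (i % 4) + 1),  # 1, 2, 3, 4, 1, ...
    ("constant 1", 2, lambda i: 1),            # 1, 1, 1, ... (does not fit)
]

# Keep only the hypotheses consistent with the observations so far.
fitting = [(name, length, f) for name, length, f in hypotheses
           if all(f(i) == observed[i] for i in range(len(observed)))]

# Prior weight 2^(-length) gives simpler hypotheses higher probability;
# normalize over the fitting set and mix their predictions.
total = sum(2.0 ** -length for _, length, _ in fitting)
prediction = {}
for name, length, f in fitting:
    next_term = f(len(observed))
    prediction[next_term] = prediction.get(next_term, 0.0) + (2.0 ** -length) / total

# prediction: {5: 8/9, 1: 1/9} — the simpler "n + 1" hypothesis dominates.
```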

3. Validity in reasoning

Most of the above solutions to the induction problem still assume that the validity of inference must be justified by the (at least probabilistic) correctness of its conclusions with respect to future observations.

However, as Hume pointed out, no such guarantee can be provided without circular reasoning. Furthermore, we have reason to believe that the future will differ from the past. Consequently, even a "scientific prediction" is fallible. Therefore, for an open system "valid" cannot mean infallible, otherwise validity is impossible.

Similarly, from given (finite) observations, we cannot decide whether a sequence follows a probability distribution or not (not to mention which one). Furthermore, under the knowledge-resource restriction, we cannot treat the observations as a binary string produced by a Turing machine, but have to describe them using concepts at much higher levels of description (reductionism cannot be accepted here). To exhaustively compare all possible explanations is never an option, nor is the requirement of fully fitting the data.

On the other hand, validity still makes sense here, since intuitively there are "right" and "wrong" things to do. Also, there are reasons to believe that the mind is near-optimal with respect to the options available in its evolutionary history.

Under AIKR, the right thing to do is to adapt, i.e., to adjust oneself to the environment as if the future were just like the past. It is the best strategy because it works in relatively stable environments, and if the environment cannot be adapted to, no strategy will work anyway.

Concretely speaking, it means a conclusion must be justified according to the system's past experience, rather than its future experience, or the environment as it is.

Under the restriction of resources, the system cannot take the whole past experience into account when evaluating a conclusion. Instead, it can only depend on the evidence collected from the past using available resources. Here, validity means the most efficient strategy in resource allocation among the tasks, for the system as a whole.
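One way to quantify "supported by available evidence" is the frequency/confidence measure used in non-axiomatic reasoning (NARS); the formulas below follow that design and are an assumption of this sketch, not derived in the text:

```python
def truth_value(positive_evidence, total_evidence, k=1):
    """Evidence-based truth value in the NARS style (assumed formulas):
    frequency  = w+ / w      (proportion of positive evidence so far)
    confidence = w / (w + k) (how stable that frequency is; k is a
                              personality constant, conventionally 1)
    Confidence approaches but never reaches 1, so no conclusion
    becomes infallible, no matter how much evidence accumulates.
    """
    frequency = positive_evidence / total_evidence
    confidence = total_evidence / (total_evidence + k)
    return frequency, confidence

# 100 ravens observed, all black: frequency 1.0, confidence 100/101.
f, c = truth_value(100, 100)
```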

In summary, there are multiple standards of rationality or validity, each of which is applicable to a certain type of system. In a non-axiomatic system, a "valid" (or "rational", "reasonable") conclusion is one that is supported by available evidence.

You can compare the above answer to that of a representative AI textbook.

4. Intelligence redefined

A new working definition for "intelligence": adaptation with insufficient knowledge and resources.

Adaptation: using the past to predict the future; using the finite supply to meet the potentially infinite demand.

Insufficient knowledge and resources: finite, open, real-time.

Similar ideas:

What is new: concrete, comprehensive, and constructive. Implications:


Reading