The general intelligence hypothesis

A friend of mine recently proposed the General Intelligence Hypothesis (GIH), which is a restatement of an old idea, and which has both a weak and a strong version.

The Weak General Intelligence Hypothesis (WGIH) says that although human brains may have many modules specialized by evolution for specific tasks, we also have an (admittedly still mysterious) general reasoning capability that allows us to excel in an extremely wide range of tasks not explicitly encountered in the evolutionary environment. This capability may be based on relatively simple principles, and it may be easier to design, learn, or evolve a similar ability in software than it would be to build systems for each of the domains we care about.

The Strong General Intelligence Hypothesis (SGIH) takes this a step further and posits that this general capability is actually necessary for most tasks we care about, and that we should expect limited progress on most important problems until we successfully build systems that can wield this power.

The mandatory question to ask when encountering a hypothesis like this is: what could disprove it? I think the SGIH would be disproved by a system that could solve all formally-specified problems that expert humans could solve given enough time (which includes proving advanced mathematical theorems and synthesizing large software systems), but that even after a year of additional engineering could not be made to pass a Turing or Feigenbaum test.

Although the SGIH will not be practically falsifiable for the foreseeable future, I think it offers a satisfying explanation for what I consider the central empirical finding of the first 60 years of AI research: that despite several mastered domains and many useful technologies, an ocean of complexity still lurks beneath almost every seemingly innocuous task, and our attempts to match humans have mostly yielded systems that are woefully brittle or degenerate or both.

The natural follow-up questions are: how are humans so incredibly good at so many things that were not present in the ancestral environment, and how did these abilities develop so suddenly in early humans? Evolution could not possibly have solved all the immensely hard problems AI researchers have been trying to solve. The WGIH offers a relatively satisfying explanation: evolution stumbled on general intelligence, and the rest took care of itself.

Future posts will discuss subtleties of the general intelligence hypothesis, and will consider its relevance and implications in more detail.

