## Neural networks, Occam’s razor, and heuristic search

Neural networks sacrifice Occam’s razor in exchange for the ability to efficiently find parameters that fit the training data. This may be a fantastic trade...

Suppose you have a list of 25 characters, 13 of which are ‘a’, and no character ever appears twice in a row. What can you tell me about the list?

Most readers will instantly know that the list starts and ends with ‘a’, and that it alternates between ‘a’ and characters besides ‘a’. This inference may seem mundane to some of you, yet despite half a century of developing systems for automated and interactive theorem proving, and of designing libraries of definitions, lemmas, and tactics just for reasoning conveniently about lists, it is still surprisingly difficult to construct a formal proof of this basic fact. We humans are *so* good at this type of reasoning that it is easy to overlook how sophisticated our abilities really are.
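The inference itself is short: 13 pairwise non-adjacent characters among 25 positions must occupy exactly the odd positions, so the ‘a’s sit at positions 1, 3, …, 25 and every other position is a non-‘a’. For contrast, here is a sketch of how just the *statement* might be formalized in Lean 4 with Mathlib (`List.Chain'` expresses the “no two adjacent characters are equal” condition); the proof is elided, which is rather the point:

```lean
import Mathlib.Data.List.Chain

-- A sketch of the puzzle's statement, assuming Mathlib.
-- `l.Chain' (· ≠ ·)` says no character appears twice in a row.
example (l : List Char)
    (hlen : l.length = 25)
    (hcount : l.count 'a' = 13)
    (hadj : l.Chain' (· ≠ ·)) :
    l.head? = some 'a' ∧ l.getLast? = some 'a' := by
  sorry -- the "obvious" proof is surprisingly hard to formalize
```

Even choosing this phrasing involves design decisions (e.g. `Chain'` versus an indexed quantifier over adjacent pairs) that shape how tractable the proof will be.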

I have spent a good deal of time trying to formalize elementary mathematics and computer science textbooks in the Lean Theorem Prover, and in trying to extend and improve Lean to make the process easier. I have been able to translate entire chapters of several textbooks into Lean in a natural way, with every line of Lean seemingly isomorphic to the informal presentation. However, once in a while I will hit a statement or proof step that may seem simple to me but that requires a major refactor, or adding new features to Lean itself, or just seems like a brick wall. My brain is able to perform massive refactorings of mathematical knowledge and abstractions, synthesize the equivalent of tens of thousands of lines of tricky and performance-critical software, and maybe even expand the logic I am effectively operating in, all in the blink of an eye.

Our field has little appreciation for these abilities, no explanation of them, and no plausible path to engineering anything like them.

Deep learning doesn’t move the needle at all on these problems.
