Any doubts that computers can do natural language processing ended dramatically yesterday, as IBM’s Watson computer defeated the two best human players on the TV quiz show Jeopardy. Although the task was constrained, it clearly shows that it won’t be long before we have computers that can understand most of what we say. This Nova episode gives a nice summary of the project. A description of the strategy and algorithms used by the program’s designers can be found here.

I think there are two lessons to be learned from Watson. The first is that machine learning will lead the way towards strong AI. (Sorry Robin Hanson, it won’t be brain emulation.) Although the designers incorporated some “hard coded” algorithms, the engine behind Watson was supervised learning from examples. The second lesson is that we may already have all the algorithms we need to get there. The Watson team didn’t have to invent any dramatically new algorithms; what was novel was the way they integrated many existing ones. This is analogous to what I called the Hopfield Hypothesis: we may already know enough biology to understand how the brain works. What we don’t understand yet is how these elements combine.
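To make the first lesson concrete, here is a toy sketch of supervised learning from labeled examples. This is emphatically not Watson’s pipeline (which combined hundreds of scoring components); it is just the bare idea the post refers to: fit a model to (input, label) pairs, then use it to label new inputs. The category names and example questions are invented for illustration.

```python
# Toy supervised learning: classify short "clue" texts by topic using
# a nearest-centroid bag-of-words model. Purely illustrative -- not
# Watson's actual method.
from collections import Counter

def featurize(text):
    """Bag-of-words features as a word-count dictionary."""
    return Counter(text.lower().split())

def train(examples):
    """Sum word counts per label to build one 'centroid' per class."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(featurize(text))
    return centroids

def predict(centroids, text):
    """Pick the label whose centroid shares the most word mass with the input."""
    feats = featurize(text)
    def overlap(label):
        return sum(min(feats[w], centroids[label][w]) for w in feats)
    return max(centroids, key=overlap)

# Hypothetical training examples (invented for this sketch).
examples = [
    ("this composer wrote nine symphonies", "music"),
    ("this opera premiered in vienna", "music"),
    ("this element has atomic number 79", "science"),
    ("this planet has the largest moon", "science"),
]
model = train(examples)
print(predict(model, "this symphony premiered in vienna"))  # -> music
```

The point is only that everything the model “knows” comes from the labeled examples, which is the sense in which learning, rather than hand-coded rules, was Watson’s engine.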

Addendum: Here is a YouTube link for the show last night.


This entry was posted on February 17, 2011 at 11:52 and is filed under Computational neuroscience, Computer Science, Neuroscience, Statistics, Technology. You can follow any responses to this entry through the RSS 2.0 feed.

February 17, 2011 at 12:39

Of course this has the Singularitarians all in a twist, but I am not impressed.

I wasn’t impressed by Deep Blue either; all this is just a PR coup for IBM.

Semantic entailment + supervised learning + a lot of data does not make for intelligence.

The “real problem” for AI is a matter of knowledge representation, not so much a matter of induction/deduction/abduction a la Monica Anderson.

Because… it would be induction/deduction/abduction upon WHAT?

February 17, 2011 at 23:28

Prof. Lipton and prof. Reagan, you should definitely get to know prof. Doron Zeilberger, or at least read some of his Opinions (http://www.math.rutgers.edu/~zeilberg/OPINIONS.html). At the very least read this: http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/priced.pdf. He has long believed in computer-generated and computer-assisted proofs, even going so far as to say it is only a matter of time until human mathematicians all become obsolete (and re-qualify as programmers, which he claims is a much more interesting and elucidating exercise). Btw, besides proving some landmark results in combinatorics, entirely as a human, he publishes papers co-authored with his computer, Shalosh B. Ekhad. He has also advanced the state of the art in computer-generated proofs by a big leap: the WZ algorithm, developed with Herb Wilf, reduces the proof of pretty much any hypergeometric identity to checking an easy-to-verify (by computer) proof certificate. One example of its power: it can prove a finite version of the Rogers–Ramanujan identity, a very difficult and deep result. (Btw, prof. Z does not like infinities either.)
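To illustrate what a WZ proof certificate looks like, here is a numeric check for the simple identity \(\sum_k \binom{n}{k} = 2^n\). Writing \(F(n,k) = \binom{n}{k}/2^n\), the certificate is a function \(G(n,k)\) satisfying \(F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k)\); summing this relation over \(k\) telescopes to show the sum is the same for every \(n\). The particular \(G\) below is a standard choice for this identity (the general WZ algorithm finds such certificates automatically).

```python
# Check a WZ proof certificate for sum_k C(n,k) = 2^n.
# F(n,k) = C(n,k)/2^n; the certificate G(n,k) = -C(n,k-1)/2^(n+1)
# should satisfy F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k) for all n, k.
from math import comb

def F(n, k):
    return comb(n, k) / 2**n if 0 <= k <= n else 0.0

def G(n, k):
    return -comb(n, k - 1) / 2**(n + 1) if 1 <= k <= n + 1 else 0.0

# Verify the WZ relation on a grid of small integers.
for n in range(12):
    for k in range(n + 3):
        lhs = F(n + 1, k) - F(n, k)
        rhs = G(n, k + 1) - G(n, k)
        assert abs(lhs - rhs) < 1e-12, (n, k)
print("WZ certificate verified")
```

The “easy to verify” part is exactly this: once \(G\) is produced, checking the identity reduces to a routine rational-function (here, numeric) verification.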

Moreover, prof. Zeilberger envisions a world of semi-rigorous mathematics, by which he pretty much implies a world of *practical* probabilistic checking of proofs, where the proofs are computer-derived too (but using a human-designed algorithm; another reminder that everything is algorithms). Here is a quote:

“I envision an abstract of a paper, c. 2100, that reads ‘We show that the Goldbach conjecture is true with probability 0.99999 and that its complete truth could be determined with a budget of $10 billion.’”

While prof. Z’s statements are exaggerated, his ideas are interesting. When reading him, keep in mind that he often oversells his case intentionally, just to make an impression (he quotes a Jewish saying to the effect of “if you don’t ask for $100 you won’t even get $50”). His claim that humans will become obsolete as mathematicians should be taken with a grain of salt, but he does believe there is more room for computer-automated proofs: that humans should move toward the big picture and use computers more extensively, for “computations” that are not only numerical, and that automate not just routine symbolic manipulations but whole patterns of reasoning and proof derivation.

Another quote I could not resist making: “The identity 2+2=4 may seem trivial nowadays but was very deep when it was discovered independently by several cavedwellers. It is a general and abstract theorem that contains, as special cases, many apparently unrelated theorems: Two bears and two bears makes four bears; two apples and two apples makes four apples. It was also realized that, in order to prove it rigorously, it suffices to prove it for any special case.”