Sunday, February 20, 2011

Fear and Loathing of the Day the Machines Take Over

Here are some bits from an article in The Atlantic magazine about the recent man vs. machine contest in which IBM's Watson took on the two greatest Jeopardy! contestants and ground them into the dust:
Oh, that Ken Jennings, always quick with a quip. At the end of the three-day Jeopardy! tournament pitting him and fellow human Brad Rutter against the IBM supercomputer Watson, he had a good one. When it came time for Final Jeopardy, he and Rutter already knew that Watson had trounced the two of them, the best competitors that Jeopardy! had ever had. So, on his written response to a clue about Bram Stoker, the author of Dracula, Jennings wrote, "I, for one, welcome our new computer overlords."

Now, think about that sentence. What does it mean to you? If you are a fan of The Simpsons, you'll be able to identify it as a riff on a line from the 1994 episode, "Deep Space Homer," wherein clueless news anchor Kent Brockman is briefly under the mistaken impression that a "master race of giant space ants" is about to take over Earth. "I, for one, welcome our new insect overlords," Brockman says, sucking up to the new bosses. "I'd like to remind them that as a trusted TV personality, I can be helpful in rounding up others to toil in their underground sugar caves."

Even if you're not intimately familiar with that episode (and you really should be), you might have come across the "Overlord Meme," which uses Brockman's line as a template to make a sarcastic statement of submission: "I, for one, welcome our (new) ___ overlord(s)." Over on Language Log, where I'm a contributor, we'd call this kind of phrasal template a "snowclone," and that one's been on our radar since 2004. So it's a repurposed pop-culture reference wrapped in several layers of irony.

But what would Watson make of this smart-alecky remark? The question-answering algorithms that IBM developed to allow Watson to compete on Jeopardy! might lead it to conjecture that it has something to do with The Simpsons -- since the full text of Wikipedia is among its 15 terabytes of reference data, and the Kent Brockman page explains the Overlord Meme. After all, Watson's mechanical thumb had beaten Ken and Brad's real ones to the buzzer on a Simpsons clue earlier in the game (identifying the show as the home of Itchy and Scratchy). But beyond its Simpsonian pedigree, this complex use of language would be entirely opaque to Watson. Humans, on the other hand, have no problem identifying how such a snowclone works, appreciating its humorous resonances, and constructing new variations on the theme.

All of this is to say that while Ken and Brad lost the battle, Team Carbon is still winning the language war against Team Silicon.


Those two gaffes (calling Toronto a U.S. city, and missing the significance of the word "missing" in a clue about George Eyser's leg) were isolated slips in a pretty clean run by Watson against his human foes, but they'll certainly be remembered at IBM. For proof, see Stephen Baker's book Final Jeopardy, an engaging inside look at the Watson project, culminating in the Jeopardy! showdown in the final chapter. (In a shrewd marketing move, the book was available electronically without its final chapter before the match, and then the ending was given to readers as an update immediately after the conclusion of the tournament.) Baker writes:
As this question-answering technology expands from its quiz show roots into the rest of our lives, engineers at IBM and elsewhere must sharpen its understanding of contextual language. Smarter machines will not call Toronto a U.S. city, and they will recognize the word "missing" as the salient fact in any discussion of George Eyser's leg. Watson represents merely a step in the development of smart machines. Its answering prowess, so formidable on a winter afternoon in 2011, will no doubt seem quaint in a surprisingly short time.
Baker's undoubtedly right about that, but we're still dealing with the limited task of question-answering, not anything even vaguely approaching full-fledged comprehension of natural language, with all of its "nuance, slang and metaphor." If Watson had chuckled at that "computer overlords" jab, then I'd be a little worried.
I'm in the 1960s camp of AI. I believe that for machines to be really useful, they need logic, not pattern-matching algorithms or neural networks. The problem with the latter is that they produce fragile "experts" that can fail unexpectedly, and we cannot tell what they do and don't know. There's a classic story about a neural network trained for the military task of finding tanks in a scene. The researchers tuned up the network and got some pretty good results, but on a new set of images they discovered unexpected failures. What had gone wrong? The machine was detecting shadows, not tanks, and the humans weren't aware of it. They had simply fed it scenes, rewarding correct guesses and punishing wrong ones; they had no way of knowing what the underlying machine-learned "algorithm" was really identifying.
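The tank story is easy to reproduce in miniature. Here's a toy sketch (invented data, not the actual military study): a plain logistic-regression "tank detector" is trained on photos where lighting happens to track the label perfectly. It learns the lighting, not the tanks, and collapses the moment that accidental correlation breaks.

```python
import math
import random

random.seed(0)

def make_scene(tank, brightness):
    # The genuine tank cue is noisy; brightness is a clean confound.
    tank_cue = (1.0 if tank else 0.0) + random.gauss(0, 0.8)
    return [tank_cue, brightness]

# Training photos: every tank shot happens to be dark (shadows, cloudy day),
# every non-tank shot bright -- the confound is perfectly predictive.
train = [(make_scene(True, 0.0), 1) for _ in range(200)] + \
        [(make_scene(False, 1.0), 0) for _ in range(200)]

# Plain logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
for _ in range(500):
    for x, y in train:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y
        w[0] -= 0.1 * g * x[0]
        w[1] -= 0.1 * g * x[1]
        b -= 0.1 * g

def predict(x):
    return 1 if 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5 else 0

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

# New photos where lighting no longer tracks tanks: tanks in sunshine,
# empty fields in shadow. The "detector" falls apart.
test = [(make_scene(True, 1.0), 1) for _ in range(200)] + \
       [(make_scene(False, 0.0), 0) for _ in range(200)]

print("train accuracy:", accuracy(train))
print("test accuracy:", accuracy(test))
```

Nothing in the training signal distinguishes "learned to find tanks" from "learned to find shadows"; only probing the model with decorrelated data exposes which one it actually learned.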

I'm willing to let AI make use of the dumb algorithms, but unless a machine system includes some real intelligence, i.e. logic and concepts manipulated in a human-like way, there will never be a "Team Silicon" to threaten "Team Carbon".
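To illustrate the contrast (with a handful of made-up facts and rules, nothing more): a minimal forward-chaining rule engine in the 1960s symbolic style produces a trace for every conclusion it draws, so unlike the tank network we can see exactly what it "knows" and why.

```python
# Each rule: (set of premise facts, conclusion fact). Toy knowledge base.
rules = [
    ({"has_treads", "has_turret"}, "is_tank"),
    ({"is_tank", "in_scene"}, "tank_in_scene"),
]

def infer(initial_facts):
    """Forward chaining: repeatedly fire any rule whose premises hold,
    recording each inference step until nothing new can be derived."""
    facts = set(initial_facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((sorted(premises), conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"has_treads", "has_turret", "in_scene"})
for premises, conclusion in trace:
    print(f"{' and '.join(premises)} => {conclusion}")
```

The knowledge is fragile in its own way (it only knows what we wrote down), but its failures are inspectable: if it misses a tank, the trace shows exactly which premise was absent, something no weight vector will tell you.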
