December 23, 2010

Research focus

In 1976 John McCarthy, one of the founders of artificial intelligence, wrote a memo discussing the problem of getting a computer to understand a story from the New York Times:
"A 61-year old furniture salesman, John J. Hug, was pushed down the shaft of a freight elevator yesterday in his downtown Brooklyn store by two robbers while a third attempted to crush him with the elevator car because they were dissatisfied with the $1,200 they had forced him to give them."
McCarthy suggested that a real understanding of this story would entail being able to answer questions like:
Who was in the store when the events began?
Who had the money at the end?
Did Mr. Hug know he was going to be robbed?
Does he know now that he was robbed?
Answering these questions is still beyond the state of the art in natural language processing, because they require common sense knowledge in addition to the text of the story. In fact, even questions that can be answered based on the text of the story alone are only beginning to be solved on a large scale:
When and where was Mr. Hug pushed?
Who forced whom to give $1,200 to whom?
Did the money satisfy the robbers?
To achieve an understanding at this level, we need to address linguistic problems like word sense disambiguation ("push" has 15 senses), named entity recognition (Mr. Hug = John J. Hug), anaphora resolution (him = John J. Hug), parsing (who did what to whom?), and semantic relation identification (dissatisfied with $1,200 = the $1,200 did not satisfy them). The figure below illustrates the main challenge: the ambiguity present in most natural language expressions. Our group studies statistical machine learning methods to address these problems, with the eventual goal of natural language understanding by machines.

Different interpretations of the sentence: "I saw the man on the hill with a telescope."
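The ambiguity in this sentence can be made concrete with a small context-free grammar. The sketch below (the grammar and code are illustrative, not part of the original post) uses a CKY chart to count the distinct parse trees the sentence admits: each prepositional phrase can attach to the verb phrase or to a preceding noun phrase, and the number of combinations grows quickly with sentence length.

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form (hypothetical, for illustration only).
# Binary rules: (parent, left child, right child).
BINARY = [
    ("S",  "NP",  "VP"),
    ("VP", "V",   "NP"),
    ("VP", "VP",  "PP"),   # PP attaches to the verb phrase: saw ... with a telescope
    ("NP", "NP",  "PP"),   # PP attaches to a noun phrase: the man on the hill
    ("NP", "Det", "N"),
    ("PP", "P",   "NP"),
]
# Lexical rules: word -> possible preterminal symbols.
LEXICON = {
    "I": ["NP"], "saw": ["V"], "the": ["Det"], "a": ["Det"],
    "man": ["N"], "hill": ["N"], "telescope": ["N"],
    "on": ["P"], "with": ["P"],
}

def count_parses(words):
    """CKY chart that counts the number of distinct parse trees for the sentence."""
    n = len(words)
    chart = defaultdict(int)  # (start, end, symbol) -> number of trees
    for i, w in enumerate(words):
        for sym in LEXICON[w]:
            chart[(i, i + 1, sym)] += 1
    for span in range(2, n + 1):
        for start in range(n - span + 1):
            end = start + span
            for mid in range(start + 1, end):
                for parent, left, right in BINARY:
                    chart[(start, end, parent)] += (
                        chart[(start, mid, left)] * chart[(mid, end, right)]
                    )
    return chart[(0, n, "S")]

sentence = "I saw the man on the hill with a telescope".split()
print(count_parses(sentence))  # -> 5 distinct parses under this toy grammar
```

Under this grammar the sentence has 5 parses: each of the two prepositional phrases chooses an attachment site, and the chart enumerates every consistent combination. Adding further prepositional phrases makes the count grow as the Catalan numbers, which is one reason resolving such ambiguity requires statistical methods rather than grammar rules alone.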
