next up previous
Next: If a picture Up: Beyond the binding Previous: The practical problem

The imagination perception loop

I criticized Cyc for having only an explicit representation, and relying on deductive inference as the main computational machinery. How else is it possible to represent things and do inference?

To illustrate the point, I will give an example from Herbert Simon, commenting on visual imagery to Gazzaniga [Gazzaniga, 1985]: ``Imagine a rectangle. Draw a line from the top right-hand corner to the bottom left corner. Now draw a line from the middle of the diagonal to the bottom right corner. Now approximately one third of the distance from the top right corner along the top line, drop a perpendicular line down to the lower edge. How many lines do you intersect?''

Introspection is typically not considered valid scientific evidence in cognitive science. However, if you felt like you were drawing the rectangle on your mental sketchpad while reading the question, it did not mislead you this time. Brain imaging studies conclusively show that some visual regions of the brain that are active during perception are also active during thinking and imagination.

Obviously the example was cooked up to fit my point exactly. Nevertheless, it illustrates a radically different way of doing inference. You probably did not enter the first couple of assertions into an ontology and then use resolution to get your answer. You simply used the reverse wiring in your brain to go from descriptions to sensations, and ran your already existing perceptual machinery to look at the answer for you [Rao, 1995b].

This illustrates a powerful inference engine. To find out whether a flying bird touches the ground, you might have used this machinery for the blink of an eye to see that it does not. Maybe the picture was drawn for you while you were hearing the sentence. To ``deduce'' that the Pisa tower is an unbalanced structure, you can imagine your body tilted at that angle, and the balance sensors in your ears will tell you it is not very stable.

McDermott, in his review, points out that a computer system cannot effectively receive a representation of a piece of knowledge without an algorithm ready to process it with reasonable efficiency [McDermott, 1993]. He adds in a footnote that if the knowledge is not represented explicitly, but is merely implicit in an efficient algorithm, this requirement does not come up.

The imagination-perception loop also illustrates implicit representation. The fact that flying birds do not touch the ground does not have to be represented anywhere in your knowledge. It is implicit in the procedures that convert a verbal description into a visual image, and in the procedures that can look at the image and answer queries. Contrast this with Cyc, which has to link flying and touching ground with an explicit chain of declarative statements.

The number of facts you can deduce from what you know is not restricted to the deductive closure of everything in your symbolic memory. To this you add the analog information in the memories of your other representational systems, and further the ability of one system to set up experiments to be run in another.

Actually, the number of facts Cyc can draw from its knowledge base within practical time limits is probably much smaller than the deductive closure. Inference is an expensive process, especially with so many rules. Deductions that require a few steps can be reached, but the search space grows exponentially as the number of steps increases. In contrast, the imagination-perception loop runs in constant time: you convert the description from one representation to another, run the already optimized constant-time visual machinery to look at the answer, and send the result back to language. The equivalent number of deductive steps is irrelevant.

If you think that you are not using your imagination for some of these problems, don't worry; there is nothing wrong with you. First of all, simple facts like ``flying things don't touch the ground'' are probably cached in your symbolic memory, even if you had to use your perceptual machinery to figure them out in the first place. If you know the answer to ``If A is taller than B, and B is taller than C, is C taller than A?'' before you have to visualize anything, it just means that you have done similar inferences hundreds of times in your life, and they have become second nature to your symbolic system. After all, we are not arguing against any of the things Cyc does, or any of the representations Cyc uses. We are advocating the use of more representations and more computational machinery.

Last year I built a chess program to try out a new move generation algorithm I had designed. It was based on the idea of maintaining a lot of state instead of redoing the computation. I ended up using a data structure that was a two-dimensional array of cells, connected to each other by doubly linked lists running in the horizontal and vertical directions, where each cell was actually a pushdown stack of structures. In the end I was proud to be the first person to use a quadruple star operator in C. The point is, we are used to designing appropriate data structures and algorithms for our programs. It is hard to understand the resistance to using multiple frameworks when one is designing a mind.






Deniz Yuret
Tue Apr 1 21:26:01 EST 1997