
Microtheories

An orthogonal direction for modularity is the use of microtheories. A microtheory is an internally consistent collection of facts about a particular domain [Sowa, 1993]. For example, the system can have a naive theory of physics in addition to a more formal one. The two would be useful in different contexts, yet they talk about the same concepts and may include incompatible facts. Instead of trying to keep the whole knowledge base in synchrony, we can let independent microtheories coexist, each of which is internally consistent.
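As a rough illustration of the idea (a hypothetical sketch, not Cyc's actual machinery; the class and microtheory names are made up), a microtheory can be modeled as a named set of assertions that is checked for consistency only against itself, so that contradictions across microtheories are tolerated:

    # Hypothetical sketch of microtheory-partitioned assertions (not Cyc's API).
    # Each microtheory is kept internally consistent; contradictions between
    # microtheories (e.g. naive vs. formal physics) are allowed.

    class Microtheory:
        """One internally consistent context; other microtheories may disagree."""
        def __init__(self, name):
            self.name = name
            self.facts = set()

        def negation(self, prop):
            # Represent negation as a ("not", p) tuple.
            return prop[1] if isinstance(prop, tuple) and prop[0] == "not" else ("not", prop)

        def assert_fact(self, prop):
            # Consistency is checked only within this microtheory,
            # never against the rest of the knowledge base.
            if self.negation(prop) in self.facts:
                raise ValueError(f"{prop!r} would make {self.name} inconsistent")
            self.facts.add(prop)

    naive  = Microtheory("NaivePhysicsMt")          # hypothetical names
    formal = Microtheory("NewtonianPhysicsMt")
    naive.assert_fact("heavy objects fall faster")
    formal.assert_fact(("not", "heavy objects fall faster"))   # no clash: different context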

Microtheories were introduced relatively late to Cyc; they are not mentioned in the book. Cyc already had a large number of assertions by the time the book was written, which means the knowledge engineers managed to keep that many assertions mutually consistent, a remarkable feat. Microtheories first appear in the midterm report, where the contents of the ontology up to that point were collected in one default microtheory called MEM (most expressive microtheory), which held most of the assertions. In recent reviews, however, it is evident that microtheories have come to play a central role in Cyc.

As a rough estimate of the current magnitude of knowledge in Cyc, there are more than 400,000 significant assertions, of which fewer than 30,000 are rules for inference, and over 500 microtheories are defined [Whitten, 1996]. The size of the knowledge base has fluctuated over the years; in particular, it has decreased when axioms have been generalized. In the midterm report (1990) Cyc was reported to have over a million assertions. Current work on the Cyc knowledge base is largely the development of microtheories for topics at the level of transportation, human emotions, modern buildings, and so on.

Of course microtheories have their own problems. After all, Cyc was a decision to build one large knowledge base instead of a hundred expert systems, and dividing Cyc up into microtheories carries the risk of exposing each microtheory to the same criticisms its builders leveled at expert systems in the first place. How can different microtheories support each other without getting in each other's way? How does the system know which microtheory to select for a given query? These and similar questions remain unanswered in the reviews.

One of the most fascinating things about the human mind, and human memory in particular, is that we don't run into the above problem. Imagine your long-term memory, full of millions of small facts ranging from your mother's face to your social-security number to the players of your favorite football team. Yet when you are trying to come up with ideas for a particular question on a physics test, none of these things seems to show up.

Besides letting Cyc focus on particular domains, microtheories have the additional function of keeping it consistent. Why try to keep Cyc consistent at all? Why insist on checking every new assertion against everything already in the knowledge base to make sure things do not clash? After all, a program could simply remember every entry on a particular topic, along with who typed it and when, and report all the different answers with their sources when probed with a query.
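The alternative just mentioned could be as simple as an append-only log keyed by topic (a hypothetical illustration, not anything proposed in the Cyc reports; the function names and sample entries are invented):

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical sketch: store every entry with its author and timestamp,
    # and answer a query by returning all recorded opinions with their sources.

    entries = defaultdict(list)   # topic -> list of (claim, author, timestamp)

    def remember(topic, claim, author):
        entries[topic].append((claim, author, datetime.now()))

    def probe(topic):
        return [f"{claim} (according to {author}, entered {when:%Y-%m-%d})"
                for claim, author, when in entries[topic]]

    remember("whales", "whales are fish", "scribe A")
    remember("whales", "whales are mammals", "scribe B")
    print(probe("whales"))        # both answers are reported, each with its source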

Deductive systems are extremely sensitive to inconsistency; that is why there are people whose field of expertise is building truth maintenance systems. A deductive system collapses rather abruptly if you assert two contradictory statements:

    1.  p → (p ∨ q)     tautology
    2.  p ∧ ¬p          the contradictory assertion
    3.  p               from 2
    4.  p ∨ q           from 1 and 3, by modus ponens
    5.  ¬p              from 2
    6.  q               from 4 and 5, by disjunctive syllogism

This small derivation, starting from a tautology, shows that the conjunction of an assertion with its negation can be used to deduce q, no matter what q is. In an inconsistent system every statement has a proof. The system thus ceases to be interesting, because it fails its main function as a logic: the ability to separate the set of all statements into the true and the false.
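The same explosion can be watched mechanically with a toy resolution prover (a minimal sketch for illustration, not Cyc's inference engine; the function names are invented). Given the contradictory clauses {p} and {¬p}, refutation finds a "proof" of an entirely unrelated q:

    # Minimal propositional resolution by refutation, illustrating explosion:
    # from the contradictory clauses {p} and {not p}, any q whatsoever follows.

    def resolve(c1, c2):
        """Return resolvents of two clauses (frozensets of signed literals)."""
        out = []
        for (sym, sign) in c1:
            if (sym, not sign) in c2:
                out.append(frozenset((c1 | c2) - {(sym, sign), (sym, not sign)}))
        return out

    def entails(kb, query):
        """True if kb proves query, by refuting kb plus the negated query."""
        clauses = set(kb) | {frozenset({(query, False)})}
        while True:
            new = set()
            for a in list(clauses):
                for b in list(clauses):
                    if a == b:
                        continue
                    for r in resolve(a, b):
                        if not r:            # empty clause: contradiction found
                            return True
                        new.add(r)
            if new <= clauses:
                return False
            clauses |= new

    kb = {frozenset({("p", True)}), frozenset({("p", False)})}   # p and not-p
    print(entails(kb, "q"))   # True: an unrelated q is "proved"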

This state of affairs is to be expected. I will argue in the next section that logic was originally invented as the science of demonstrative arguments, where finding out who is right and who is wrong is all that matters. When it was applied to the science of thinking, rather ugly things began to show themselves. As one typical example, new information could change old conclusions, which is never the case in traditional logic: if a statement can be proven from the existing axioms, it remains provable no matter what you add to the system. To cope with this, the field of ``non-monotonic reasoning'' was born, and to handle further problems, frame axioms, the closed world assumption, circumscription, default reasoning, and the like had to be invented.
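The standard textbook illustration of non-monotonicity is default reasoning about flying birds; a minimal sketch of the effect (not any particular formalism from the literature, and the function name is invented) shows a conclusion being withdrawn when new information arrives, which cannot happen in classical logic:

    # Minimal sketch of default (non-monotonic) reasoning: a conclusion that
    # was derivable can be retracted when new facts arrive.

    def flies(facts):
        # Default rule: birds fly, unless we know of an exception.
        return "bird" in facts and "penguin" not in facts

    facts = {"bird"}
    print(flies(facts))        # True: by default, the bird flies

    facts.add("penguin")       # new information about the same individual
    print(flies(facts))        # False: the earlier conclusion is withdrawn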

The immunity of humans to inconsistency is a remarkable fact. In a science fiction story by C. Cherniak [Hofstadter and Dennett, 1981], an artificial intelligence researcher enters a trance in front of his computer. His friends finally notice the problem after a few days and try to wake him up, but he never comes out of the trance and dies within a couple of days. Mysteriously, the people who dig up his work and look at his files share the same terrible fate. At the conclusion it is revealed that the first victim had discovered the Gödel sentence for humans. Thank God we are not built as deductive systems. Tuttle and Smith discuss various ways in which human thought differs from that of logical automatons [Tuttle, 1993, Smith, 1991].

When faced with solutions this complicated, one wonders whether one is even in the right search space.





