Seminar: March 6

Gyorgy Turan, University of Illinois at Chicago

Belief revision and commonsense knowledge bases

The field of knowledge representation and reasoning in AI seeks expressive yet tractable frameworks for representing knowledge that may be incomplete, incorrect, inconsistent, and constantly changing, and for reasoning with this knowledge in ways that may differ from mathematical reasoning.

This is one of the classical challenges of AI, still largely unresolved. The relatively recent availability of large commonsense knowledge bases poses new challenges and opportunities. In particular, there appear to be interesting open problems in developing new theoretical models and efficient algorithms.

We discuss two topics in this broad area.

Belief revision deals with the problem of updating a knowledge base when new, contradictory information is received. The main approach, due to Alchourron, Gardenfors and Makinson (1985), is to formulate postulates and characterize the revision operators that satisfy them. Full propositional logic is assumed, so the operators are inherently intractable (often even harder than NP). Recently there has been work on belief revision for the Horn fragment of propositional logic. We describe some results from this currently very active area.
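To make the notion of a revision operator concrete, here is a minimal sketch of Dalal's distance-based revision, one well-known operator satisfying the AGM postulates: the revised belief keeps the models of the new information that are closest, in Hamming distance, to some model of the old beliefs. The sketch (not from the talk) enumerates all truth assignments explicitly, which is exponential in the number of atoms and so also illustrates the intractability mentioned above; formulas are represented as Boolean predicates over assignments for simplicity.

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments over `atoms` satisfying `formula`,
    where `formula` is a predicate on a dict atom -> bool."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(m1, m2):
    """Number of atoms on which two assignments disagree."""
    return sum(m1[a] != m2[a] for a in m1)

def dalal_revise(old, new, atoms):
    """Dalal-style revision: keep the models of `new` at minimum
    Hamming distance from the nearest model of `old`."""
    old_models = models(old, atoms)
    new_models = models(new, atoms)
    if not old_models:          # old beliefs inconsistent: simply accept new
        return new_models
    dist = [min(hamming(m, o) for o in old_models) for m in new_models]
    dmin = min(dist)
    return [m for m, d in zip(new_models, dist) if d == dmin]

# Example: believe p and q; then learn not p.
# Revision retracts p but retains q, changing as little as possible.
atoms = ["p", "q"]
result = dalal_revise(lambda m: m["p"] and m["q"],
                      lambda m: not m["p"], atoms)
# result == [{"p": False, "q": True}]
```

The AGM postulates do not fix a unique operator; Dalal's is one of several distance-based choices, and the Horn-fragment work mentioned above studies which such operators remain well-behaved when formulas are restricted to Horn clauses.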

The development of large commonsense knowledge bases for the purpose of commonsense reasoning has been going on for several decades; Cyc and ConceptNet are examples of such systems. We propose an approach to comparing and evaluating the capabilities of these systems that may also be useful for suggesting directions for their further development, and describe the initial results of some experiments.

Joint work with Kira Adaricheva, Stellan Ohlsson, Bob Sloan, Balazs Szorenyi, Daniel Uber and Aaron Urasky.