There is a class of computer programs, known as expert systems, that aim to mimic human reasoning. The methods and techniques used to build these programs are the outcome of efforts in a field of computer science known as Artificial Intelligence (AI). Expert systems have been built to diagnose disease (Pathfinder is an expert system that assists surgical pathologists with the diagnosis of lymph-node diseases), aid in the design of chemical syntheses, prospect for mineral deposits (PROSPECTOR), translate natural languages, and solve complex mathematical problems (MACSYMA).
Our purpose in writing this description of expert systems is to demystify these programs by revealing their surprisingly simple architecture, explaining a method by which they can be constructed inexpensively, and illustrating, using a simple example, the process by which they seem to reason.
In conventional computer programs, problem-solving knowledge is encoded in program logic and in program-resident data structures. Expert systems differ from conventional programs in both the way problem knowledge is stored and the way it is used.
An expert system is a computer program with a set of rules encapsulating knowledge about a particular problem domain (e.g., medicine, chemistry, finance, flight). These rules prescribe actions to take when certain conditions hold and define the effect of each action on deductions or data. The expert system, seemingly, uses reasoning capabilities to reach conclusions or to perform analytical tasks. Expert systems that record the knowledge needed to solve a problem as a collection of rules stored in a knowledge-base are called rule-based systems.
Expert systems are especially important to organizations that rely on people who possess specialized knowledge of some problem domain, particularly if this knowledge and experience cannot be easily transferred. Artificial intelligence methods and techniques have been applied to a broad range of problems and disciplines, some of which are esoteric and others extremely practical.
One of the early applications, MYCIN, was created to help physicians diagnose and treat bacterial infections. Expert systems have been used to analyze geophysical data in the search for petroleum and metal deposits (e.g., PROSPECTOR). They are used by the investment, banking, and telecommunications industries. They are essential in robotics, natural language processing, theorem proving, and the intelligent retrieval of information from databases. They are also used in many human endeavors which might be considered more practical: rule-based systems have been used to monitor and control traffic, to aid in the development of flight systems, and by the federal government to prepare budgets.
A rule-based expert system maintains a separation between its Knowledge-base and the part of the system that executes rules, often referred to as the expert system shell. The shell is indifferent to the rules it executes. This is an important distinction, because it means the expert system shell can be applied to many different problem domains with little or no change. It also means that adding or modifying rules can change program behavior without affecting the controlling component, the system shell.
The language used to express a rule is closely related to the language subject matter experts use to describe problem solutions. When subject matter experts compose a rule using this language, they are, at the same time, creating a written record of problem knowledge, which can then be shared with others. Thus, the creation of a rule kills two birds with one stone: the rule adds functionality or changes program behavior, and it records essential information about the problem domain in a human-readable form. Knowledge captured and maintained by these systems ensures continuity of operations even as subject matter experts (e.g., mathematicians, accountants, physicians) retire or transfer.
Furthermore, changes to the Knowledge-base can be made easily by subject matter experts without programmer intervention, thereby reducing the cost of software maintenance and helping to ensure that changes are made in the way they were intended. Rules are added to the knowledge-base by subject matter experts using text or graphical editors that are integral to the system shell. The simple process by which rules are added to the knowledge-base is depicted in Figure 1.
Finally, the expert system never forgets, can store and retrieve more knowledge than any single human being can remember, and makes no errors, provided the rules created by the subject matter experts accurately model the problem at hand.
An expert system is, typically, composed of two major components, the Knowledge-base and the Expert System Shell. The Knowledge-base is a collection of rules encoded as metadata in a file system, or more often in a relational database. The Expert System Shell is a problem-independent component housing facilities for creating, editing, and executing rules. A software architecture for an expert system is illustrated in Figure 2.
The shell portion includes software modules whose purpose it is to,
The Client Interface processes requests for service from system-users and from application layer components. Client Interface logic routes these requests to an appropriate shell program unit. For example, when a subject matter expert wishes to create or edit a rule, they use the Client Interface to dispatch the Knowledge-base Editor. Other service requests might schedule a rule, or a group of rules, for execution by the Rule Engine.
The Knowledge-base Editor is a simple text editor, a graphical editor, or some hybrid of these two types. It provides facilities that enable a subject matter expert to compose and add rules to the Knowledge-base.
Rules, as they are composed by subject matter experts, are not directly executable. They must first be converted from their human-readable form into a form that can be interpreted by the Rule Engine. Converting rules from one form to another is a function performed by the Rule Translator.
The translation of rules in their original form to a machine-readable form requires parsing the textual representation to obtain a data structure referred to as an Abstract Syntax Tree (AST). The AST representation of rules is the memory-resident data structure that guides execution of the inference engine when it comes time to apply a rule. The AST is an abstract data type designed to make its interpretation by the Rule Engine simple and efficient. This abstract data type is very expressive and permits the construction of very complex and powerful rules. There is a third form in which rules may be expressed. A rule AST is converted into an equivalent form suitable for storage in the Knowledge-base. The way in which this information appears in the Knowledge-base depends on the storage technology. Relational databases provide an especially convenient and efficient means for storing metadata. The metadata corresponding to an AST in this case would reside in a collection of tables. The specific scheme chosen for saving the metadata must permit the ASTs to be rapidly reconstructed using database queries.
The diagram in Figure 3 summarizes the several conversion operations the Rule Translator must perform as it adds rules to the Knowledge-base and as it retrieves rules from the Knowledge-base for execution.
To make these ideas concrete, consider the arithmetic expression,
(a-b) / [x*(c-d)] ,
that might form some part of a rule. In the AST representation, this portion of the rule would be expressed as a binary tree, like the one in Figure 4, whose nodes are either arithmetic operators or operands.
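To make the tree representation concrete, here is a minimal sketch in Python of how such an expression AST might be built; the node classes and the to_text function are illustrative assumptions, not part of any particular shell. The sketch also shows how a human-readable form can be recovered from the tree by a simple traversal.

```python
from dataclasses import dataclass
from typing import Union

# Interior nodes are operators; leaves are operands.
@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str              # "+", "-", "*", "/"
    left: "Node"
    right: "Node"

Node = Union[Var, BinOp]

# The expression (a-b) / [x*(c-d)] as a binary tree:
expr = BinOp("/",
             BinOp("-", Var("a"), Var("b")),
             BinOp("*", Var("x"), BinOp("-", Var("c"), Var("d"))))

def to_text(node: Node) -> str:
    """Recreate a human-readable form of the expression from the AST."""
    if isinstance(node, Var):
        return node.name
    return f"({to_text(node.left)} {node.op} {to_text(node.right)})"

print(to_text(expr))   # ((a - b) / (x * (c - d)))
```

A recursive traversal like to_text is all that is needed to regenerate rule text from an AST, which is why no information is lost in translation.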
Once created, AST representations are converted into rule metadata and stored in the Knowledge-base. Rule metadata is simply a compact representation of ASTs. The role of the Rule Translator in the rule editing process is illustrated in the data flow diagram of Figure 5.
In translating rules from one form to another, the structure of the original rule is never lost. It is always possible to recreate a human-readable rule, exactly, from its Knowledge-base representation or from its AST representation.
The Rule Engine (often referred to as an inference engine in AI literature) is responsible for executing Knowledge-base rules. It retrieves rules from the Knowledge-base, converts them to ASTs, and then provides them to its rule interpreter for execution. The Rule Engine interpreter traverses the AST, executing actions specified in the rule along the way. This process is depicted in Figure 6.
The shell component, Rule Object Classes, is a container for object classes supporting,
The construction of an expert system is less challenging than one might think, given the almost magical powers attributed to this class of programs. The task is made easier because,
The design and the construction of the expert system involves the four major steps depicted in Figure 7.
The surest way to make certain the expert system is well-received is to design a rule language that is easy for users to learn and easy to use. The language must serve the needs of the subject matter expert. Subject matter experts need a language that mimics the way they speak, think, and operate. Its lexical elements and syntax rules must make the resulting language appear as natural (i.e., similar to the spoken language) as possible. These goals must be balanced with the need to create a language that can be recognized by translators created with conventional translation writing tools.
Rules must be expressed using a simple grammar, free of the idiosyncrasies of natural languages. Every sentence of the language used to express rules must have precise and unvarying meanings. A natural language with its inherent ambiguities is difficult to translate. This is because, in a natural language, words do not have a single meaning, nor are word meanings precise. In a natural language the meaning of words and phrases are colored by their context. Machine translation of natural languages is possible, but the program logic required is complex, expensive, and not wholly reliable. Machine translation of natural languages relies largely on heuristics that make translations imperfect.
On the other hand, context-free languages can be made to appear natural-like. However, as any programmer knows, these languages are unforgiving of spelling and grammar errors. Fortunately, methods for the translation of context-free languages are rooted in a mature and well-understood mathematics. Tools for building context-free languages translators make their construction straightforward and reliable.
The tools used to generate translators for context-free languages require precise language definitions. These language definitions are called grammars. A sample grammar defining a hypothetical rule language appears in Figure 8.
The grammar defines both the syntax and the semantics of the language. The grammar production extending across lines 13 and 14,
rule_definition -> rule rule_name rule_description is preamble_sequence statement_sequence end rule_name ;
defines the structure of a rule. This production is read: rule_definition produces the keyword, rule, followed by something called rule_name, followed by something called rule_description, followed by the keyword, is, followed by something called preamble_sequence, followed by something called statement_sequence, followed by the keyword, end, followed by something called rule_name.
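For illustration, the fixed skeleton of this production could be checked by a parser in roughly the following way; the token handling is greatly simplified, and every name below is hypothetical rather than drawn from the actual Figure 8 grammar.

```python
# A sketch of checking a token sequence against the production
#   rule_definition -> rule rule_name rule_description is
#                      preamble_sequence statement_sequence end rule_name ;
# Real parsers are generated from the grammar; this hand check is only
# meant to show what the production requires of a rule's overall shape.

def matches_rule_definition(tokens):
    # Expect: "rule", a name, a description, "is", ..., "end", same name.
    if len(tokens) < 6 or tokens[0] != "rule" or tokens[3] != "is":
        return False
    name = tokens[1]
    return tokens[-2] == "end" and tokens[-1] == name

tokens = ["rule", "R1", '"a sample rule"', "is",
          "E", "=", "x", "*", "y", "end", "R1"]
print(matches_rule_definition(tokens))   # True
```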
A simple, but meaningless rule, conforming to our hypothetical grammar appears in Figure 9.
This rule might be interpreted to mean that the value of the expression, x*(-(y+z)), is assigned to the object, E, appearing on the left-hand side of the equals sign.
The rules composed by subject matter experts must be translated and stored in the Knowledge-base before they can be executed. Rules are transformed into machine-readable forms by the Rule Translator. The Rule Translator is generated automatically from lexical specifications and from the rule grammar. Lexical specifications define the lexical units (e.g., reserved words, numerical values, operators, etc.) of the expert system's rule language. The rule grammar defines the syntax and semantics of the language.
Figure 10 illustrates the method by which the Rule Translator is automatically generated. The lexical analyzer generator depicted in this diagram operates on a definition of the language's lexical units to produce a software component referred to as a lexical analyzer or scanner. The scanner reads and converts rules in text form into a sequence of lexical units called tokens. Each token is a sequence of logically related characters having special meaning. For example, the text,
A stitch in time saves nine.,
might be decomposed by a lexical analyzer, designed for use by a grammarian or linguist, into the token sequence,
article, noun, preposition, noun, verb, noun, period
These token streams are used by the rule parser to structure the rule into larger, meaningful units called language constructs (e.g., control statements, assignment statements, etc.), and to perform the semantic actions (e.g., creation of AST elements) required by the constructs.
The parser generator produces two software components, a Rule Parser and a collection of semantic action subprograms. The Rule Parser receives token streams from the Scanner and structures these tokens into meaningful units (e.g., control statements, assignment statements, etc.). The parser invokes semantic actions (e.g., creation of AST elements) appropriate to the constructs it discovers. Semantic actions are software units dispatched by the parser as it recognizes language constructs in the rule. These semantic actions build the Abstract Syntax Trees (ASTs) discussed in previous sections.
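A scanner of the kind generated here can be sketched with regular expressions; the token names, patterns, and sample rule text below are illustrative assumptions rather than the product of any particular lexical analyzer generator.

```python
import re

# Each named pattern classifies a run of characters as one kind of token.
TOKEN_SPEC = [
    ("KEYWORD",  r"\b(?:rule|is|end)\b"),   # reserved words (hypothetical)
    ("NUMBER",   r"\d+(?:\.\d+)?"),         # numeric literals
    ("NAME",     r"[A-Za-z_]\w*"),          # identifiers
    ("OPERATOR", r"[-+*/=()]"),             # operators and punctuation
    ("SKIP",     r"\s+"),                   # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def scan(text):
    """Convert rule text into a sequence of (token_kind, lexeme) pairs."""
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(scan("E = x * (y + 2)"))
# [('NAME', 'E'), ('OPERATOR', '='), ('NAME', 'x'), ('OPERATOR', '*'),
#  ('OPERATOR', '('), ('NAME', 'y'), ('OPERATOR', '+'), ('NUMBER', '2'),
#  ('OPERATOR', ')')]
```

Listing KEYWORD before NAME matters: reserved words would otherwise be swallowed by the more general identifier pattern.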
The Expert System's Rule Editor provides the means for subject matter experts to compose rules. Some people prefer graphical editors. Graphical editors with the expressive power needed to precisely specify complex rules, however, are difficult to build, and might very well end up being too unwieldy for subject matter experts to use. Text editors are simple to use and are rather easy to build. The disadvantage of text editors is this: they are unforgiving of spelling and syntax errors. This fact forces subject matter experts to understand rule syntax, simple though it might be. To relieve the subject matter experts of this burden and to speed development of rules, the editor could be equipped with special aids. These might include a syntax-directed editing feature, drop-down menus, checkboxes, radio buttons, and help facilities.
Rather than build a text editor from scratch, a commercially available editor, similar to TextPad, could be purchased and substituted for a custom-built editor.
The Rule Engine executes (or more properly, interprets) Knowledge-base rules. Execution of rules involves performing the steps diagrammed in Figure 11.
Rule actions may include:
The retrieval of rule metadata and the construction of a memory-resident AST, operations corresponding to steps 1 and 2 depicted in Figure 11, could be performed by a single reference to a hypothetical method, Rule.AST, within the Rule Object Classes component:
rule_AST := Rule.AST(rule_identifier) ;
The method, Rule.AST, might query the Knowledge-base for elements of the rule identified by its input parameter, and assemble those elements into an attributed AST defining the rule.
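As one concrete, purely hypothetical storage scheme, the rule metadata might occupy a single table of AST nodes, each row holding an operator or operand and pointers to its child rows. The table layout, column names, and node encoding below are assumptions for illustration; they show how a Rule.AST-style method could rebuild a memory-resident tree with recursive queries.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ast_node (
    rule_id INTEGER, node_id INTEGER, kind TEXT,
    value TEXT, left_id INTEGER, right_id INTEGER)""")

# Metadata for the statement  E = x * (-(y + z))  of rule 1; unary minus
# is encoded here as a "neg" node with only a left child.
rows = [
    (1, 1, "op",  "=",   2,    3),
    (1, 2, "var", "E",   None, None),
    (1, 3, "op",  "*",   4,    5),
    (1, 4, "var", "x",   None, None),
    (1, 5, "op",  "neg", 6,    None),
    (1, 6, "op",  "+",   7,    8),
    (1, 7, "var", "y",   None, None),
    (1, 8, "var", "z",   None, None),
]
db.executemany("INSERT INTO ast_node VALUES (?,?,?,?,?,?)", rows)

def load_ast(rule_id, node_id=1):
    """Rebuild the memory-resident AST for a rule from its metadata."""
    kind, value, left_id, right_id = db.execute(
        "SELECT kind, value, left_id, right_id FROM ast_node "
        "WHERE rule_id=? AND node_id=?", (rule_id, node_id)).fetchone()
    if kind == "var":
        return value
    left = load_ast(rule_id, left_id) if left_id else None
    right = load_ast(rule_id, right_id) if right_id else None
    return (value, left, right)

print(load_ast(1))
# ('=', 'E', ('*', 'x', ('neg', ('+', 'y', 'z'), None)))
```

A production scheme would fetch all of a rule's rows in one query and assemble the tree in memory, satisfying the requirement that ASTs be rapidly reconstructed.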
To illustrate, consider the assignment statement,
E = x * (- (y + z)) ;
appearing in our sample rule. For the sake of simplicity, assume that the only purpose of the rule is to assign a value to the left-hand side of the assignment statement. Somewhere in that rule, attribute values for the variables, x, y, and z have been recorded.
The defining information for the terms x, y, and z (as well as all other symbols referenced in the rule) is kept in a symbol table connected to the rule AST. If the information needed to assign values to x, y, and z is kept in a database, then database queries would be generated dynamically by the Rule Engine to obtain values for these variables. The information needed by the from clause and where clause of these queries is drawn from the rule's symbol table. Suppose the queries return values of 200, 150, and 75 for x, y, and z, respectively.
The Rule Engine is now prepared to evaluate the rule's AST, which in this case looks something like the binary tree in Figure 12.
The evaluation of the assignment operator's right expression subtree is performed using the method described by the algorithm shown in Figure 13.
The three diagrams in Figure 14 trace execution of the expression evaluation algorithm and attempt to show the machinations of this algorithm as it works to evaluate the arithmetic expression x*(-(y+z)).
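The evaluation walk can be sketched in a few lines of Python: children are evaluated first, then their parent operator is applied. The nested-tuple encoding of the AST and the "neg" operator name are illustrative assumptions; the values 200, 150, and 75 are those returned by the example queries above.

```python
# Operand values obtained earlier by the Rule Engine's generated queries.
symbols = {"x": 200, "y": 150, "z": 75}

def evaluate(node):
    """Postorder evaluation: evaluate subtrees, then apply the operator."""
    if isinstance(node, str):            # leaf: look up the operand's value
        return symbols[node]
    op, left, right = node
    if op == "neg":                      # unary minus has one child
        return -evaluate(left)
    a, b = evaluate(left), evaluate(right)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

# The right-hand side  x * (-(y + z))  as a nested tuple:
rhs = ("*", "x", ("neg", ("+", "y", "z"), None))
print(evaluate(rhs))   # -45000
```

With y+z = 225, negation gives -225, and x times that yields -45000, the value the Rule Engine would move to the destination variable E.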
Finally, the Rule Engine moves the value of the left-hand side expression to the destination variable, E. What it does afterward depends on how the rule was written.
The object of this last exercise was to eliminate any anxiety a developer might feel, by demonstrating the simplicity of the scheme by which rules are executed. Rules are not executed by magic. We simply apply a variety of programming techniques that should be familiar to any trained computer scientist.