next up previous contents
Next: General flow of control Up: A Uniform Tabular Algorithm Previous: Generation

Properties

 

The uniform tabular algorithm is a straightforward extension of the optimized general SLD-resolution rule, whose correctness is proven in [Höhfeld and Smolka 1988]. It therefore inherits this correctness property, which can be seen as follows.

Each inference rule applies the optimized goal-reduction rule: it checks whether two clauses can be combined into a resolvent, where the satisfiability of the constraints involved is verified immediately by means of unification. If unification fails, it is known that this part of the search space cannot contain an answer, and consequently the two clauses will not derive a lemma.
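As an illustration, the goal-reduction step can be sketched as follows. The term representation, the `?`-variable convention, and the function names are invented for this sketch; the full algorithm operates on feature terms with constraints, and the toy unifier below omits the occurs check:

```python
def walk(term, subst):
    """Dereference a variable under the substitution."""
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def unify(x, y, subst=None):
    """Unify two terms; variables are strings starting with '?', compound
    terms are tuples (functor, arg1, ...). Returns a substitution dict,
    or None on failure."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith('?'):
        return {**subst, x: y}
    if isinstance(y, str) and y.startswith('?'):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def reduce_goal(selected_literal, clause_head):
    """Goal reduction succeeds only if the selected literal unifies with
    the clause head; a failed unification abandons this branch of the
    search space, so no lemma is derived from it."""
    return unify(selected_literal, clause_head)
```

The point of the optimization is that the satisfiability check and the construction of the resolvent's bindings are one and the same unification step.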

The prediction rule generates new items on the basis of instantiations of grammar clauses, i.e., it predicts (instantiated) axioms defined by the grammar. The scanning rule reduces an active item using an instantiation of a lexical clause, so it too uses axioms for reduction. The rules passive-completion and active-completion combine two already derived lemmas. However, these completion operations can only be performed after either prediction or scanning has been applied; therefore, they operate on axioms or on consequences of axioms. Since the completion rules reduce the body of an active lemma, repeated application will transform it into a passive lemma, so the selection function has fewer and fewer literals from which to select.
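The effect of completion on an active lemma can be sketched as follows. This is a deliberately simplified model: literals are plain tuples, equality stands in for unification, and the names `Item` and `complete` are invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Item:
    head: tuple        # the derived literal (the lemma's head)
    body: tuple = ()   # remaining body literals; empty => passive item

    @property
    def passive(self):
        return not self.body

def complete(active, passive):
    """Resolve the selected (first) body literal of an active item with the
    head of a passive item. Plain equality stands in for unification here.
    Each successful completion removes one body literal, so repeated
    application eventually yields a passive (unit) lemma."""
    if active.passive or not passive.passive:
        return None
    selected, *rest = active.body
    if selected != passive.head:
        return None
    return Item(head=active.head, body=tuple(rest))
```

For example, an active item for `s` with body `np, vp` becomes an active item with body `vp` after completion with a passive `np` item, and a passive `s` item after a further completion with a passive `vp` item.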

Each inference rule is fair because it generates a set of items in which each item corresponds to a possible alternative branch at that point of the derivation. The rules thus have a ``built-in breadth-first'' component, which guarantees completeness.

The indexing mechanism does not destroy this property, for the following reasons. Firstly, the indices are determined directly from the value of the essential feature, which is done only after unification has taken place successfully. Secondly, the completion rules active-completion and passive-completion need only consider their corresponding item sets, because it is guaranteed that all phrases of a given type are stored in the table before any attempt is made to use the table to look for phrases of that type [Pereira and Shieber 1987]. Furthermore, passive-completion and active-completion can only take place after prediction and scanning have been performed. Finally, the backward-pointer mechanism as defined in 4.9 guarantees that no solutions of a sub-goal are out of view, because all items in one item set share one common property: they are compatible with respect to the value of the essential feature of one of their literals, which is the head in the case of a unit lemma and the selected element in the case of a non-unit lemma. Thus, an item set is a meeting place of active and passive items, such that an active item looks for a passive item to resolve with, and, vice versa, a passive item looks for an active item with which it can resolve. In either case, both are identical with respect to the value of their essential feature.
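A minimal sketch of the indexing idea, with an invented data layout: items are dictionaries, each literal carries an `essential` feature value, and an item is filed under the essential-feature value of its head (if passive) or of its selected body literal (if active), so items that can resolve with each other always meet in the same item set:

```python
from collections import defaultdict

# Item sets keyed by essential-feature value, each holding the active and
# passive items indexed under that value.
item_sets = defaultdict(lambda: {'active': [], 'passive': []})

def essential_index(item):
    """Index by the essential feature of the head (passive item) or of the
    selected body literal (active item)."""
    literal = item['head'] if not item['body'] else item['body'][0]
    return literal['essential']

def insert(item):
    """File the item in the item set determined by its essential feature."""
    kind = 'passive' if not item['body'] else 'active'
    item_sets[essential_index(item)][kind].append(item)
```

With this layout, a passive `np` item and an active item whose selected literal is `np` land in the same item set, which is exactly the meeting-place property described above.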

All derived candidate items are first added to the agenda before they are placed into the appropriate item sets. The agenda control guarantees that every derived item will be inserted into an item set, which means that it can be used by the next application of the inference rules. Before an item is added to an item set, the blocking test checks whether the set already contains an item that subsumes it; if so, the item is not added. Thus, the blocking test guarantees that only the most general lemmas are kept in the item sets.
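The agenda control with the blocking test can be sketched as follows. Purely for illustration, subsumption is modeled here as inclusion of variable bindings over the same clause skeleton; the actual algorithm checks subsumption over feature terms:

```python
def subsumes(general, specific):
    """Toy subsumption test: an item subsumes another if both share the
    same skeleton and the first's bindings are a subset of the second's."""
    return (general['skeleton'] == specific['skeleton']
            and set(general['bindings'].items())
                <= set(specific['bindings'].items()))

def process_agenda(agenda, item_set):
    """Move candidate items from the agenda into the item set, dropping
    any candidate already subsumed by a stored lemma (the blocking test),
    so only the most general lemmas are kept."""
    while agenda:
        candidate = agenda.pop()
        if any(subsumes(old, candidate) for old in item_set):
            continue                # blocked: a more general lemma exists
        item_set.append(candidate)  # available to the next inference step
    return item_set
```

For instance, once a fully unbound `np(X)` lemma is stored, a later candidate binding `X` to `sg` is blocked, since it adds no answers beyond those of the general lemma.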





Guenter Neumann
Mon Oct 5 14:01:36 MET DST 1998