Recently, a constraint-based view of semantic representation has become
quite popular in the area of computational semantics, e.g.,
[Fenstad et al.
1987], [Pollard and Sag1987], [Alshawi and Pulman1992], and
[Nerbonne1992]. The main advantage of representing semantic
information as feature structures is that it makes it possible to express a simple
and systematic syntax/semantics interface, since ``it harmonizes
so well with the way in which syntax is now normally described; this
close harmony means that syntactic and semantic processing can
be as tightly coupled as one wishes - indeed, there needn't be any
fundamental distinction between them at all. In feature-based
formalisms, the structure shared among syntactic and semantic values
constitutes the interface in the only sense in which this exists.''
[Nerbonne1992], page 3. Thus the constraint-based view sees the
interface as being specified as a set of constraints, to which
non-syntactic information (e.g., phonological or even pragmatic
information) may contribute.
From a processing point of view, the advantage of viewing semantic information directly as part of a constraint-based grammar is not only that a parallel view of the different levels of description becomes possible, but also that the relationship between these levels can be stated completely declaratively. Processing of semantic information can then be performed in tandem with the processing of syntactic information, using the same basic constraint-solving mechanism, e.g., unification. This means that no special processes are needed for mapping syntactic information to semantic information and vice versa (at least with respect to grammatical processing). This is especially useful in the case of generation, where the basic task is to find, for given semantic information represented as a feature structure, the strings licensed by the grammar.
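To make the idea of a single constraint-solving mechanism concrete, the following sketch implements unification over feature structures encoded as nested Python dictionaries. Both the encoding and the function are illustrative assumptions of this presentation, not part of the cited formalisms; in particular, the sketch omits structure sharing (reentrancy), which the quotation above emphasizes and which would require variable bindings.

```python
def unify(fs1, fs2):
    """Unify two feature structures encoded as nested dicts
    (atomic values are plain strings). Returns the merged
    structure, or None if the two structures clash."""
    if fs1 == fs2:
        return fs1
    if not (isinstance(fs1, dict) and isinstance(fs2, dict)):
        return None  # two distinct atomic values clash
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None  # clash in an embedded structure
            result[feat] = sub
        else:
            result[feat] = val  # feature only in fs2: just add it
    return result
```

For instance, `unify({"PRED": "erzählen"}, {"SORT": "binary"})` merges the two constraints into one structure, while `unify({"PRED": "erzählen"}, {"PRED": "lügen"})` fails, returning `None`.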
Although the techniques for processing reversible grammars are supposed to abstract away from the particularities of phonological and semantic representation, we have to define some simple semantic structures in order to illustrate the methods developed in the next chapters with concrete examples. For this reason we will represent semantic structures essentially as predicate-argument structures (following [VanNoord1993]). For example, the binary predicate `erzählen' (meaning `to tell') will be represented as follows:
where the feature PRED specifies the name of the predicate, the value of SORT specifies the arity, and the features ARG1 and ARG2 hold the semantic structures of the arguments. As another example, consider the representation of the null-ary predicate `lügen' (the meaning of the noun `lies'):
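The original attribute-value matrices are not reproduced here; under the description just given, a plausible encoding of the two structures as nested dictionaries would be the following. The feature names are those of the text, while the SORT values `binary` and `nullary` are hypothetical labels chosen for this sketch.

```python
# Hypothetical encoding of the structures described in the text.
erzaehlen = {
    "PRED": "erzählen",  # name of the predicate ('to tell')
    "SORT": "binary",    # arity: the predicate takes two arguments
    "ARG1": {},          # semantic structure of the first argument
    "ARG2": {},          # semantic structure of the second argument
}

luegen = {
    "PRED": "lügen",     # null-ary predicate: no ARG features at all
    "SORT": "nullary",
}
```

The empty dictionaries under ARG1 and ARG2 stand for the as-yet-unconstrained argument positions, to be instantiated by unification during processing.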
If we assume that semantic structures are bound to the feature SEM, then the simplified relationship between the phonological string ``peter erzählt lügen'' (`peter tells lies') and its semantic representation `erzählen(peter,lügen)' would be the feature structure
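In the dictionary encoding used above, this combined structure might be sketched as follows. The feature name PHON for the phonological string and the encoding of the proper name `peter' as a null-ary predicate are assumptions of this sketch, as the text fixes only the feature SEM.

```python
# Hypothetical sign for "peter erzählt lügen" ('peter tells lies');
# PHON is an assumed feature name for the phonological string.
sentence = {
    "PHON": "peter erzählt lügen",
    "SEM": {                                      # erzählen(peter, lügen)
        "PRED": "erzählen",
        "SORT": "binary",
        "ARG1": {"PRED": "peter", "SORT": "nullary"},
        "ARG2": {"PRED": "lügen", "SORT": "nullary"},
    },
}
```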
Modifier constructions such as noun-adjective constructions or adverbial modifications will be represented using the feature MOD, which holds the (possibly complex) semantic structure of the modifier. However, instead of placing the MOD feature at the same level as the ARG features, we will bundle the semantics of the modified predicate-argument structure under the feature ARG1. Thus a modifier construction consists of a feature structure with the top-level features MOD and ARG1. The sortal value of such constructions will be restricted to the value MODIFIER. Therefore, the semantic structure of the utterance ``Heute erzählt peter gerne lügen'' (`today, peter gladly tells lies') may look as follows:
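Continuing the dictionary encoding, the two adverbial modifiers `heute' (`today') and `gerne' (`gladly') would each contribute a MOD/ARG1 layer of sort MODIFIER around the predicate-argument structure. The particular nesting order of the two modifiers shown here is an assumption of this sketch, as is the encoding of the adverbs as null-ary predicates.

```python
# Hypothetical structure for "Heute erzählt peter gerne lügen"
# ('today, peter gladly tells lies'); modifier nesting order assumed.
modified = {
    "SORT": "MODIFIER",
    "MOD": {"PRED": "heute", "SORT": "nullary"},   # 'today'
    "ARG1": {
        "SORT": "MODIFIER",
        "MOD": {"PRED": "gerne", "SORT": "nullary"},  # 'gladly'
        "ARG1": {                                     # erzählen(peter, lügen)
            "PRED": "erzählen",
            "SORT": "binary",
            "ARG1": {"PRED": "peter", "SORT": "nullary"},
            "ARG2": {"PRED": "lügen", "SORT": "nullary"},
        },
    },
}
```

Note how each MODIFIER layer leaves the embedded predicate-argument structure intact under ARG1, so that recursive modification falls out directly from the encoding.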