DFKI-LT - Dissertation Series, Vol. XXXV
Bart Cramer: Improving the feasibility of precision-oriented HPSG parsing
price: € 16
This thesis focuses on the feasibility of precision-oriented parsing in the framework of Head-driven Phrase Structure Grammar (Pollard and Sag 1994). Such parsers, traditionally based on hand-written grammars, offer detailed semantic analyses of the language. However, there are a number of barriers that need to be overcome before such a parser can be successfully deployed, most notably the grammar's long development time. Statistical parsers are less prone to this issue, but do not offer the same depth of analysis that hand-written deep grammars can. A number of approaches (in different linguistic formalisms, often highly lexicalised) have been proposed that aim to combine the advantages of both types of parsers, usually by converting or enriching an existing treebank to a deeper linguistic formalism, after which a deep grammar can be learnt from the richer resource of annotated data. This thesis has a comparable aim, but approaches the problem from the perspective of precision-oriented parsing: automating as much as possible, and crafting by hand what is necessary, for instance because it is not learnable from the available resources. The German language is taken as the object of study.
A full-fledged deep grammar (in the DELPH-IN toolchain, in which the research is embedded) minimally consists of the following components: a set of constructions, a lexicon, a morphological analyser, a treebank and a disambiguation model based on that treebank. All these components are created in the first half of the thesis, with the aim of minimising the effort needed for each of them. It is argued that HPSG constructions are too complicated to learn, and they are therefore hand-written. Augmented with a small lexicon of syntactically or semantically idiosyncratic lexemes, this forms the core grammar. Naturally, the core grammar is based on available HPSG analyses of German (and Dutch) in the literature, of which an overview is given. One of the contributions of the specific core grammar in this thesis is the novel treatment of word order and topological fields, based on an implementation of finite-state automata (FSAs) in the typed feature structure formalism that is used in the DELPH-IN framework.
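The topological-field idea behind that treatment can be illustrated with a plain finite-state automaton. The sketch below is not the thesis's typed-feature-structure encoding; the field inventory and transitions are simplified assumptions for a German declarative main clause (Vorfeld, left bracket, optional Mittelfeld, optional right bracket, optional Nachfeld):

```python
# Illustrative FSA over topological field labels (simplified assumption,
# not the actual DELPH-IN/TDL implementation described in the thesis).
TRANSITIONS = {
    ("start", "VF"): "vf",  # Vorfeld
    ("vf", "LK"): "lk",     # left bracket (finite verb)
    ("lk", "MF"): "mf",     # Mittelfeld
    ("lk", "RK"): "rk",     # right bracket (verb cluster)
    ("mf", "RK"): "rk",
    ("rk", "NF"): "nf",     # Nachfeld
}
ACCEPTING = {"lk", "mf", "rk", "nf"}

def accepts(fields):
    """True if the sequence of field labels is a licit linearisation."""
    state = "start"
    for field in fields:
        state = TRANSITIONS.get((state, field))
        if state is None:
            return False
    return state in ACCEPTING
```

Encoding the same automaton as typed feature structures, as the thesis does, lets the word-order constraints interact with the rest of the grammar rather than run as a separate pre-processing step.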
Subsequently, the lexicon is constructed automatically from a detailed dependency treebank (the Tiger treebank: Brants et al. 2002), in a deep lexical acquisition step. The syntactic properties of the lexical entries, such as subcategorisation frames and modification constraints, are recognised on the basis of the dependency labels that define the relations between constituents. Additionally, a partial mapping between word forms and lexemes is learnt, which functions as the grammar's morphological analyser.
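At its core, such an acquisition step amounts to counting argument frames over dependency records. The data and label choices below are hypothetical, loosely modelled on Tiger-style grammatical functions (SB = subject, OA = accusative object, DA = dative object, MO = modifier); the real procedure is considerably richer:

```python
from collections import Counter, defaultdict

ARG_LABELS = {"SB", "OA", "DA", "OC"}  # argument functions; modifiers ignored

# Hypothetical (sentence_id, head_lemma, dependency_label) records.
records = [
    (1, "geben", "SB"), (1, "geben", "OA"), (1, "geben", "DA"), (1, "geben", "MO"),
    (2, "schlafen", "SB"), (2, "schlafen", "MO"),
    (3, "geben", "SB"), (3, "geben", "OA"), (3, "geben", "DA"),
]

# Collect the argument labels of each verb occurrence...
per_occurrence = defaultdict(list)
for sent, lemma, label in records:
    if label in ARG_LABELS:
        per_occurrence[(sent, lemma)].append(label)

# ...and count how often each frame (a sorted label tuple) occurs per lemma.
frame_counts = defaultdict(Counter)
for (_, lemma), labels in per_occurrence.items():
    frame_counts[lemma][tuple(sorted(labels))] += 1

# The acquired lexical entry keeps each lemma's most frequent frame.
lexicon = {lemma: counts.most_common(1)[0][0]
           for lemma, counts in frame_counts.items()}
```

On this toy data, "geben" receives a ditransitive frame and "schlafen" an intransitive one; a real acquisition step would additionally handle ambiguity, frequency thresholds and modification constraints.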
A link is made between the output of the grammar and the dependencies that can be derived from the Tiger treebank. This entails that the normal output of a DELPH-IN grammar (Minimal Recursion Semantics: Copestake et al. 2005) will not be used. Instead, the grammar and Tiger treebank are interfaced by syntactic dependencies. A novel way to test the chain of core grammar, deep lexical acquisition and MRS conversion is introduced (unit testing), allowing the grammar writer to track the influence a change in any of the components has on the correctness of the grammar. Furthermore, the link between the output of the grammar and the Tiger treebank makes an automatic disambiguation between licensed readings possible, allowing the automatic creation of an HPSG treebank.
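A minimal version of such a unit test, with an invented stand-in for the parsing chain, might compare the dependency triples a parse yields against the gold triples:

```python
def parse_to_dependencies(sentence):
    """Stand-in for the real chain (core grammar -> lexical entries ->
    dependency conversion); returns a canned set of
    (head, label, dependent) triples for demonstration."""
    canned = {
        "Der Mann schläft": {("schläft", "SB", "Mann"), ("Mann", "NK", "Der")},
    }
    return canned.get(sentence, set())

def check(sentence, gold):
    """A unit test passes when no gold triple is missing and none is spurious."""
    got = parse_to_dependencies(sentence)
    return not (gold - got) and not (got - gold)
```

Run over a battery of sentences, tests of this shape localise regressions: a failure points at the component whose change broke the triple in question.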
A held-out part of the gold standard is used as an evaluation set, showing the performance of the parser (the combination of a parsing algorithm and the grammar) on unseen text. The performance is measured along multiple dimensions, such as development time, linguistic relevance and coverage, and forms the background to a larger discussion in which the grammar is situated among, and compared to, an array of hand-written and learnt parsers. A number of areas for improvement were found, and two of them were addressed afterwards: (lack of) efficiency and robustness. In these experiments, the grammar was left mostly unchanged, and the PET parser, which takes the grammar as input, was altered.
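Dependency-based evaluation of this kind typically reduces to precision, recall and f-score over triples. A toy computation, on invented data, looks like this:

```python
def f_score(gold, predicted):
    """Labelled dependency f-score over sets of (head, label, dependent) triples."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With two of three predicted triples correct against three gold triples, both precision and recall are 2/3, and so is the f-score.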
In its standard setting, the PET parser executes all parser tasks (unifications). The agenda of parser tasks was changed in such a way that more promising tasks are carried out first, while less promising tasks are deferred or even discarded. A generative model of HPSG rule applications determines the relative priority of the tasks. A number of strategies were introduced to decide which tasks are pruned from the agenda. Good results were achieved with these techniques, showing both an increase in accuracy and a manifold speed-up.
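The agenda mechanism can be sketched with a priority queue: each task carries a score (here invented constants rather than the thesis's generative model of rule applications), the highest-scoring tasks run first, and tasks below a threshold are pruned outright:

```python
import heapq

def run_agenda(tasks, prune_below=0.1, budget=3):
    """Execute up to `budget` tasks in descending score order; tasks scoring
    below `prune_below` never enter the agenda at all."""
    agenda = [(-score, name) for score, name in tasks if score >= prune_below]
    heapq.heapify(agenda)  # min-heap on negated scores = max-heap on scores
    executed = []
    while agenda and len(executed) < budget:
        _, name = heapq.heappop(agenda)
        executed.append(name)
    return executed

# Hypothetical tasks: rule applications with assumed model scores.
tasks = [(0.9, "S -> NP VP"), (0.05, "NP -> NP NP"),
         (0.6, "VP -> V NP"), (0.3, "NP -> Det N")]
```

Pruning the agenda trades completeness for speed; the accuracy gain reported in the thesis comes from the statistical model steering the search away from implausible unifications in the first place.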
The relative fragility of precision-oriented parsers is the second aspect of the parser that was improved. A common approach to finding an analysis for an unlicensed sentence is to return a set of recognised fragments. In our experiments, a variant of this fragment parsing strategy functioned as the baseline. A new method was introduced, in which highly over-generating robustness rules were created by the grammar writer. These added rules are meant to take up unrecognised material, minimising the damage it causes, and are part of the normal parsing process, although highly dispreferred by the statistical models. The use of robustness rules yields better analyses than the fragment parsing approach, but does not cover the entire test set. A combination of both strategies therefore resulted in higher f-scores than fragment parsing, while retaining full coverage.
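Combining the two strategies amounts to a fallback chain. In this sketch all parser functions are hypothetical stand-ins: a full analysis is preferred, then one using the dispreferred robustness rules, and only then a set of fragments:

```python
def parse_with_fallback(sentence, parsers):
    """Try each parser in order of preference; return the first analysis found."""
    for parser in parsers:
        result = parser(sentence)
        if result is not None:
            return result
    return None

# Toy stand-ins: only the fragment parser handles this input.
full_parse = lambda s: None
robust_parse = lambda s: None
fragment_parse = lambda s: ["[NP the unknown gizmo]", "[VP sparkled]"]
```

Because the robustness rules are dispreferred by the disambiguation model rather than applied in a separate pass, the real system gets this preference ordering for free during normal parsing; the explicit chain here only illustrates the resulting behaviour.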