Parse reranking for domain-adaptative relation extraction

Feiyu Xu; Hong Li; Yi Zhang; Hans Uszkoreit; Sebastian Krause
In: Journal of Logic and Computation (JLC), Oxford University Press, 2012.


The article demonstrates how generic parsers in a minimally supervised information extraction framework can be adapted to a given task and domain for relation extraction (RE). The experiments include two parsers that deliver n-best readings: (1) a generic deep-linguistic parser (PET) with a largely hand-crafted head-driven phrase structure grammar for English (ERG); (2) a generic statistical parser (Stanford Parser) trained on the Penn Treebank. It is shown how the estimated confidence of RE rules learned from the n-best parses can be exploited for parse reranking with both parsers. The acquired reranking model improves RE performance in both the training and the test phase with the new first parses. The resulting significant boost in recall does not come from an overall gain in parsing performance but from an application-driven selection of the parses best suited for the RE task. Since the readings best suited for the successful extraction of rules and instances are often not those favoured by a regular parser evaluation, generic parsing accuracy actually decreases. The novel method for task-specific parse reranking requires no annotated data beyond the semantic seed, which is needed for the RE task anyway.
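The core idea of the abstract, reordering a parser's n-best readings by how confidently learned RE rules match them, can be illustrated with a minimal sketch. All names, the rule-confidence table, and the scoring scheme below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of task-specific parse reranking: each of a parser's
# n-best readings is rescored by the summed estimated confidence of the
# relation-extraction (RE) rules whose patterns match it, and the readings
# are reordered so RE-friendly parses come first.

from dataclasses import dataclass, field

@dataclass
class Parse:
    rank: int                              # position in the parser's original n-best list (0 = best)
    matched_rules: list = field(default_factory=list)  # IDs of RE rules matching this reading

def rerank(nbest, rule_confidence):
    """Order parses by total confidence of matched RE rules (descending),
    falling back to the parser's original ranking to break ties."""
    def score(parse):
        return sum(rule_confidence.get(r, 0.0) for r in parse.matched_rules)
    return sorted(nbest, key=lambda p: (-score(p), p.rank))

# Illustrative rule confidences, e.g. estimated during seed-based rule learning.
confidence = {"marriage_subj_obj": 0.9, "marriage_pp": 0.4}

nbest = [
    Parse(rank=0),                                      # parser's favourite; no RE rule fires
    Parse(rank=1, matched_rules=["marriage_pp"]),
    Parse(rank=2, matched_rules=["marriage_subj_obj"]), # best suited for the RE task
]

reranked = rerank(nbest, confidence)
```

Note that the new first parse (`rank=2` here) may well be one a standard parser evaluation would not favour, which matches the abstract's observation that generic parsing accuracy can decrease while RE recall improves.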
