Publication

The QALL-ME Benchmark: a Multilingual Resource of Annotated Spoken Requests for Question Answering

Elena Cabrio; Milena Kouylekov; Bernardo Magnini; Matteo Negri; Laura Hasler; Constantin Orasan; David Tomás; José L. Vicedo; Günter Neumann; Corinna Weber
In: Proceedings of the 6th International Conference on Language Resources and Evaluation. International Conference on Language Resources and Evaluation (LREC-2008), May 28-30, Marrakech, Morocco, ELRA, 2008.

Abstract

This paper presents the QALL-ME benchmark, a multilingual resource of annotated spoken requests in the tourism domain, freely available for research purposes. The languages currently involved in the project are Italian, English, Spanish and German. It introduces a semantic annotation scheme for spoken information access requests, specifically derived from Question Answering (QA) research. In addition to pragmatic and semantic annotations, we propose three QA-based annotation levels: the Expected Answer Type, the Expected Answer Quantifier and the Question Topical Target of a request, to fully capture the content of a request and extract the sought-after information. The QALL-ME benchmark is developed under the EU-FP6 QALL-ME project, which aims at the realization of a shared and distributed infrastructure for QA systems on mobile devices (e.g. mobile phones). Questions are formulated by users in free natural language input, and the system returns the actual sequence of words which constitutes the answer from a collection of information sources (e.g. documents, databases). Within this framework, the benchmark has the twofold purpose of training machine learning-based applications for QA and testing their actual performance with a rapid turnaround in a controlled laboratory setting.
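To illustrate the three QA-based annotation levels described above, the following sketch models an annotated request as a simple record. The field names, class name, and example values are illustrative assumptions, not the benchmark's actual annotation schema:

```python
from dataclasses import dataclass


@dataclass
class AnnotatedRequest:
    """A spoken request with the three QA-based annotation levels
    (Expected Answer Type, Expected Answer Quantifier, Question
    Topical Target). Field names are hypothetical, chosen only to
    mirror the levels named in the abstract."""
    text: str                        # transcribed request
    language: str                    # e.g. "it", "en", "es", "de"
    expected_answer_type: str        # semantic class of the sought answer
    expected_answer_quantifier: str  # e.g. "single", "all", "count"
    question_topical_target: str     # entity/event the question is about


# A made-up example in the tourism domain:
req = AnnotatedRequest(
    text="Which hotels in Trento have a swimming pool?",
    language="en",
    expected_answer_type="HOTEL",
    expected_answer_quantifier="all",
    question_topical_target="hotels in Trento",
)
print(req.expected_answer_type)   # prints "HOTEL"
```

A record like this makes explicit why all three levels are needed: the answer type tells a QA system what kind of entity to return, the quantifier how many instances are requested, and the topical target what the question is actually about.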