Overview of the CLEF 2007 Multilingual Question Answering Track

D. Giampiccolo; A. Peñas; C. Ayache; D. Cristea; P. Forner; V. Jijkoun; P. Osenova; P. Rocha; B. Sacaleanu; R. Sutcliffe

In: A. Nardi; C. Peters (eds.). CLEF 2007 Working Notes. Conference and Labs of the Evaluation Forum (CLEF), Online Proceedings, 9/2007.


The fifth QA campaign at CLEF, the first having been held in 2003, was characterized by continuity with the past and, at the same time, by innovation. Topics were introduced, under which a number of question-answer pairs could be grouped into clusters, also containing co-references between them. Moreover, systems were given the possibility to search for answers in Wikipedia. In addition to the main task, two other tasks were offered: the Answer Validation Exercise (AVE), which continued the previous year's successful pilot, and QAST, aimed at evaluating Question Answering on Speech Transcripts. As a general remark, the task proved to be more difficult than expected: compared with the previous year's results, the Best Overall Accuracy dropped from 49.47% to 41.75% in the multilingual subtasks and, more significantly, from 68.95% to 54% in the monolingual subtasks.

OverviewCLEF2007WN.pdf (PDF, 260 KB)

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)