A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs

Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, Anders Søgaard

In: Proceedings of the 25th Conference on Computational Natural Language Learning (CoNLL 2021), November 10-11, 2021, Online (Dominican Republic), pages 224-257. Association for Computational Linguistics, November 2021.


Negation is one of the most fundamental concepts in human cognition and language, and several natural language inference (NLI) probes have been designed to investigate pretrained language models' ability to detect and reason with negation. However, the existing probing datasets are limited to English and do not enable controlled probing of performance in the presence or absence of negation. In response, we present a multilingual (English, Bulgarian, German, French, and Chinese) benchmark collection of NLI examples that are grammatical and correctly labeled, as a result of manual inspection and editing. We use the benchmark to probe the negation-awareness of multilingual language models and find that models that correctly predict examples with negation cues often fail to correctly predict their counter-examples *without* negation cues, even when the cues are irrelevant for semantic inference.
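The probing setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the cue list, the `strip_negation` helper, and the `predict` interface are assumptions for English only, whereas the benchmark's examples span five languages and were manually inspected and edited rather than generated automatically.

```python
import re

# Illustrative English negation cues (an assumption; the benchmark itself
# covers English, Bulgarian, German, French, and Chinese).
CUES = ("not", "never", "no")

def strip_negation(sentence, cues=CUES):
    """Remove negation cues to build the cue-free member of a minimal pair."""
    pattern = r"\b(" + "|".join(re.escape(c) for c in cues) + r")\b"
    return re.sub(r"\s+", " ", re.sub(pattern, "", sentence)).strip()

# A pair where the negation cue is irrelevant for the inference:
# the hypothesis is entailed whether or not the man wears a hat.
premise = "A man who is not wearing a hat is standing outside."
hypothesis = "A man is standing outside."

pair = {
    "with_cue": (premise, hypothesis, "entailment"),
    "without_cue": (strip_negation(premise), hypothesis, "entailment"),
}

def consistent(predict, pair):
    """True iff a model labels both members of the minimal pair correctly.

    `predict` is any callable mapping (premise, hypothesis) to an NLI label;
    the paper's finding is that multilingual models are often inconsistent
    across such pairs.
    """
    return all(predict(p, h) == gold for p, h, gold in pair.values())
```

Comparing a model's accuracy on the two halves of each pair isolates the effect of the negation cue itself, which is what the uncontrolled English-only probes could not do.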


Further links

2021.conll-1.19.pdf (PDF, 1 MB)

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)