
Publication

The Role of Explainability in Collaborative Human-AI Disinformation Detection

Vera Schmitt; Luis-Felipe Villa-Arenas; Nils Feldhus; Joachim Meyer; Robert P. Spang; Sebastian Moeller
In: FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT-2024), ISBN 9798400704505, Association for Computing Machinery, New York, NY, USA, 2024.

Abstract

Manual verification has become very challenging given the increasing volume of information shared online and the role of generative Artificial Intelligence (AI). Thus, AI systems are used to identify disinformation and deep fakes online. Previous research has shown that combining AI and human expertise yields superior performance. Moreover, according to the EU AI Act, human oversight is indispensable when AI systems are used in domains where fundamental human rights, such as the right to free expression, might be affected. AI systems therefore need to be transparent and offer sufficient explanations to be comprehensible. Much research has been done on integrating eXplainability (XAI) features to increase the transparency of AI systems, but these features often lack human-centered evaluation. Additionally, the meaningfulness of explanations varies depending on users’ background knowledge and individual factors. This research therefore implements a human-centered evaluation schema to assess different XAI features for the collaborative human-AI disinformation detection task. Objective and subjective evaluation dimensions, such as performance, perceived usefulness, understandability, and trust in the AI system, are used to evaluate the different XAI features. A user study was conducted with 433 participants in total, of whom 406 were crowdworkers and 27 were journalists participating as experts in detecting disinformation. The results show that free-text explanations improve the performance of non-experts but do not influence the performance of experts. The XAI features increase perceived usefulness, understandability, and trust in the AI system, but they can also lead crowdworkers to blindly trust the AI system when its predictions are wrong.

Projects