

Interactively Providing Explanations for Transformer Language Models

Felix Friedrich; Patrick Schramowski; Christopher Tauchmann; Kristian Kersting
In: Stefan Schlobach; María Pérez-Ortiz; Myrthe Tielman (Eds.). HHAI 2022: Augmenting Human Intellect - Proceedings of the First International Conference on Hybrid Human-Artificial Intelligence. International Conference on Hybrid Human-Artificial Intelligence (HHAI-2022), June 13-17, Amsterdam, Netherlands, Pages 285-287, Frontiers in Artificial Intelligence and Applications, Vol. 354, IOS Press, 2022.


Transformer language models are state of the art in a multitude of NLP tasks. Despite these successes, their opaqueness remains problematic. Recent methods aiming to provide interpretability and explainability to black-box models primarily focus on post-hoc explanations of (sometimes spurious) input-output correlations. Instead, we emphasize using prototype networks directly incorporated into the model architecture and hence explain the reasoning process behind the network's decisions. Our architecture performs on par with several language models and, moreover, enables learning from user interactions. This not only offers a better understanding of language models but also leverages human capabilities to incorporate knowledge beyond the rigid range of purely data-driven approaches.
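To illustrate the general idea of a prototype layer built into the model architecture (rather than a post-hoc explainer), the following is a minimal sketch, not the authors' implementation: sentence embeddings from a transformer are compared to learned prototype vectors, and the similarity scores both drive the classification and serve as the explanation. All names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Hypothetical prototype head; the real architecture may differ."""

    def __init__(self, embed_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learned prototypes living in the same space as the sentence embeddings.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        # Each class score is a weighted sum of prototype similarities, so a
        # prediction can be traced back to its most similar prototypes.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, embeddings: torch.Tensor):
        # Negative squared Euclidean distance as a similarity score.
        similarities = -torch.cdist(embeddings, self.prototypes) ** 2
        logits = self.classifier(similarities)
        # The similarities are the built-in explanation of the decision.
        return logits, similarities

model = PrototypeClassifier(embed_dim=768, num_prototypes=10, num_classes=2)
emb = torch.randn(4, 768)  # stand-in for transformer sentence embeddings
logits, sims = model(emb)
```

Because the prototypes are trainable parameters, user interaction could in principle adjust or replace individual prototypes, which is one way the "learning from user interactions" described above could be realized.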
