

One Explanation Does Not Fit XIL

Felix Friedrich; David Steinmann; Kristian Kersting
In: Krystal Maughan; Rosanne Liu; Thomas F. Burns (Eds.). The First Tiny Papers Track at ICLR 2023. International Conference on Learning Representations (ICLR-2023), May 1-5, Kigali, Rwanda, 2023.


Current machine learning models produce outstanding results in many areas but, at the same time, suffer from shortcut learning and spurious correlations. To address such flaws, the explanatory interactive machine learning (XIL) framework has been proposed: it revises a model by employing user feedback on the model's explanations. This work sheds light on the explanations used within this framework. In particular, we investigate simultaneous model revision through multiple explanation methods. To this end, we find that *one explanation does not fit XIL* and propose considering multiple explanation methods when revising models via XIL.
