

Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations

Patrick Schramowski; Wolfgang Stammer; Stefano Teso; Anna Brugger; Xiaoting Shao; Hans-Georg Luigs; Anne-Katrin Mahlein; Kristian Kersting
In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2001.05371, Pages 0-10, arXiv, 2020.


Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior---exploiting confounding factors within datasets---to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop so that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.
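The core mechanism behind this kind of explanation feedback can be sketched on a toy problem: a user marks a confounding feature as irrelevant, and training adds a penalty on the model's input gradients over the masked feature (a "right for the right reasons"-style regularizer). The data, variable names, and hyperparameters below are illustrative assumptions, not taken from the paper; for a linear model the input gradient of the logit is simply the weight vector, so the penalty reduces to shrinking the masked weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Feature 0 is the true signal; feature 1 is a confounder that happens
# to correlate almost perfectly with the label in the training data.
signal = rng.normal(size=n)
y = (signal > 0).astype(float)
confound = y + 0.1 * rng.normal(size=n)   # spurious shortcut feature
X = np.column_stack([signal, confound])

# Simulated user feedback on the explanation: "do not use feature 1".
mask = np.array([0.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, mask, lam, lr=0.5, steps=2000):
    """Logistic regression with an input-gradient penalty on masked features."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_ce = X.T @ (p - y) / len(y)      # cross-entropy gradient
        # For a linear model, d(logit)/dx = w, so penalizing masked
        # input gradients means penalizing lam * (mask * w)**2.
        grad_pen = 2.0 * lam * mask * w
        w -= lr * (grad_ce + grad_pen)
    return w

w_plain = train(X, y, mask, lam=0.0)  # relies heavily on the confounder
w_xil = train(X, y, mask, lam=1.0)    # feedback shrinks the confounder weight
print("plain:", w_plain, "with feedback:", w_xil)
```

Without the penalty the model leans on the shortcut feature; with it, the confounder's weight collapses while the true signal is still used, which is the behavior XIL aims to elicit interactively.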
