Publication

Pattern-Guided Integrated Gradients

Robert Schwarzenberg, Steffen Castle

In: Proceedings of the ICML 2020 Workshop on Human Interpretability in Machine Learning (WHI), International Conference on Machine Learning (ICML 2020), Vienna, Austria (online), 2020.

Abstract

Integrated Gradients (IG) and PatternAttribution (PA) are two established explainability methods for neural networks. Both methods are theoretically well-founded. However, they were designed to overcome different challenges. In this work, we combine the two methods into a new method, Pattern-Guided Integrated Gradients (PGIG). PGIG inherits important properties from both parent methods and passes stress tests that the originals fail. In addition, we benchmark PGIG against nine alternative explainability approaches (including its parent methods) in a large-scale image degradation experiment and find that it outperforms all of them.
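The abstract does not spell out the PGIG formulation. For orientation only, the sketch below implements plain Integrated Gradients with a Riemann-sum approximation of the path integral; the grad_fn hook marks the place where, going by the abstract's description, a pattern-guided gradient (in the spirit of PatternAttribution) would presumably be substituted. All names and the toy model are illustrative assumptions, not the authors' code.

import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * int_0^1 dF/dx_i(x' + a(x - x')) da.

    grad_fn(z) returns dF/dz at point z. Plain IG uses the model gradient;
    a pattern-guided variant would plug a pattern-modified gradient in here
    (hypothetical, inferred from the abstract).
    """
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.stack(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas]
    )
    # Average gradient along the straight-line path, scaled by the input delta.
    return (x - baseline) * path_grads.mean(axis=0)

# Toy quadratic "model" F(x) = sum(w * x**2) with its analytic gradient.
w = np.array([0.5, -1.0, 2.0])
grad_fn = lambda z: 2.0 * w * z

x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros_like(x)
print(integrated_gradients(x, baseline, grad_fn))

With enough steps the attributions sum (approximately) to F(x) - F(baseline), the completeness property that IG, and by inheritance PGIG, is designed to satisfy.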

Projects

pgig.pdf (pdf, 964 KB)
