Publication

Layerwise Relevance Visualization in Convolutional Text Graph Classifiers

Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig

In: Workshop on Graph-Based Natural Language Processing, Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), November 3-7, Hong Kong SAR, China. ACL 2019.

Abstract

Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain. Graph Convolutional Networks (GCN) allow this projection, but existing explainability methods do not exploit this fact, i.e. do not focus their explanations on intermediate states. In this work, we present a novel method that traces and visualizes features that contribute to a classification decision in the visible and hidden layers of a GCN. Our method exposes hidden cross-layer dynamics in the input graph structure. We experimentally demonstrate that it yields meaningful layerwise explanations for a GCN sentence classifier.
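To make the idea of layerwise relevance concrete, below is a minimal, self-contained NumPy sketch of relevance propagation through a two-layer GCN sentence classifier. The toy graph, weights, pooling, and the LRP-epsilon rule used here are illustrative assumptions, not the authors' implementation; it only shows how per-node relevance can be obtained at every layer and then visualized on the input graph structure.

```python
# Minimal sketch: layerwise relevance through a two-layer GCN (assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence graph: 4 token nodes with a few dependency-like edges.
tokens = ["the", "plot", "was", "gripping"]
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_loop = A + np.eye(len(tokens))                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_loop.sum(1)))
A_hat = D_inv_sqrt @ A_loop @ D_inv_sqrt                # symmetric normalization

d_in, d_hid, n_classes = 8, 6, 2
X = rng.normal(size=(len(tokens), d_in))                # toy token embeddings
W1 = rng.normal(size=(d_in, d_hid))
W2 = rng.normal(size=(d_hid, n_classes))

# Forward pass: H1 = ReLU(A_hat X W1), class scores via sum pooling over nodes.
H1 = np.maximum(A_hat @ X @ W1, 0.0)
Z2 = A_hat @ H1 @ W2
logits = Z2.sum(axis=0)
target = int(np.argmax(logits))

def lrp_gcn_layer(A_hat, H_in, W, R_out, eps=1e-6):
    """LRP-epsilon through one GCN layer Z = A_hat @ H_in @ W.

    The contribution of input entry (u, i) to output (v, k) is
    A_hat[v, u] * H_in[u, i] * W[i, k]; relevance is redistributed
    proportionally to these contributions.
    """
    Z = A_hat @ H_in @ W
    S = R_out / np.where(Z >= 0, Z + eps, Z - eps)      # stabilized ratios
    return H_in * (A_hat.T @ S @ W.T)

# Start from the relevance of the predicted class only.
R2 = np.zeros_like(Z2)
R2[:, target] = Z2[:, target]

R1 = lrp_gcn_layer(A_hat, H1, W2, R2)                   # relevance at the hidden layer
R0 = lrp_gcn_layer(A_hat, X, W1, R1)                    # relevance at the input layer

# Per-node relevance at each layer can be drawn onto the graph structure,
# exposing how relevance shifts across layers (the layerwise visualization idea).
for name, R in [("hidden layer", R1), ("input layer", R0)]:
    print(name, dict(zip(tokens, np.round(R.sum(axis=1), 3))))
```

Because each GCN layer keeps the node structure of the input graph, the per-layer relevance maps can all be projected back onto the same sentence graph, which is what makes this intermediate-state inspection possible in the first place.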

Projects

18_schwarzenberg.pdf (pdf, 202 KB)

German Research Center for Artificial Intelligence (DFKI)