Publication
Retrieving Argument Graphs Using Vision Transformers
Kilian Bartz; Mirko Lenz; Ralph Bergmann
In: Elena Chistova; Philipp Cimiano; Shohreh Haddadan; Gabriella Lapesa; Ramon Ruiz-Dolz (Eds.). Proceedings of the 12th Workshop on Argument Mining. ACL Workshop on Argument Mining (ArgMining-2025), located at ACL-2025, Vienna, Austria, Pages 32-45, ISBN 979-8-89176-258-9, Association for Computational Linguistics, July 2025.
Abstract
Through manual annotation or automated argument mining, arguments can be represented not only as text, but also in structured formats such as graphs. When searching for relevant arguments, this additional information about the relationships between their elementary units allows fine-grained structural constraints to be formulated by using graphs as queries. Retrieval can then be performed by computing the similarity between the query and all available arguments. Previous works employed Graph Edit Distance (GED) algorithms such as A* search to compute mappings between nodes and edges for determining the similarity, which is computationally expensive. In this paper, we propose an alternative based on Vision Transformers where arguments are rendered as images to obtain dense embeddings. We propose multiple space-filling visualizations and evaluate the retrieval performance of the vision-based approach against an existing A* search-based method. We find that our technique runs orders of magnitude faster than A* search and scales well to larger argument graphs while achieving competitive results.
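To make the pipeline described in the abstract concrete, below is a minimal sketch of the vision-based retrieval idea: render an argument graph as an image, embed it with a pretrained Vision Transformer, and rank stored graphs by cosine similarity to the query embedding. This is not the authors' implementation; the model checkpoint, the spring layout (a stand-in for the paper's space-filling visualizations), and the helper names are illustrative assumptions.

```python
# Hypothetical sketch of ViT-based argument graph retrieval.
# Assumptions: a generic ViT checkpoint, a networkx spring layout in place of
# the paper's space-filling visualizations, and toy random graphs as a corpus.
import io

import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import networkx as nx
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k").eval()


def render_graph(graph: nx.DiGraph) -> Image.Image:
    """Render a graph to a 224x224 RGB image for the ViT input."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
    nx.draw(graph, pos=nx.spring_layout(graph, seed=0), ax=ax,
            node_size=300, arrows=True)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


@torch.no_grad()
def embed(graph: nx.DiGraph) -> torch.Tensor:
    """Dense embedding of a rendered graph (CLS token of the ViT)."""
    inputs = processor(images=render_graph(graph), return_tensors="pt")
    return model(**inputs).last_hidden_state[:, 0]  # shape: (1, hidden_dim)


# Toy corpus of "argument graphs" and a query graph.
corpus = [nx.gnp_random_graph(n, 0.3, seed=n, directed=True) for n in (5, 8, 12)]
query = nx.gnp_random_graph(6, 0.3, seed=42, directed=True)

# Rank the corpus by cosine similarity to the query embedding; unlike GED,
# corpus embeddings can be precomputed, so each query costs one forward pass.
query_emb = embed(query)
scores = [torch.cosine_similarity(query_emb, embed(g)).item() for g in corpus]
ranking = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
print("ranking:", ranking, "scores:", [round(s, 3) for s in scores])
```

The design point this illustrates is the source of the speedup reported in the paper: A*-based GED must search for a node/edge mapping per query-candidate pair, whereas the embedding approach reduces comparison to a single vector similarity over precomputable embeddings.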