Publication
Elucidating linear programs by neural encodings
Florian Peter Busch; Matej Zecevic; Kristian Kersting; Devendra Singh Dhami
In: Georgios Leontidis (Ed.). Frontiers in Artificial Intelligence, Vol. 8, Pages 1-14, Frontiers Media SA, June 2025.
Abstract
Linear Programs (LPs) are one of the major building blocks of AI and have
underpinned recent strides in differentiable optimizers for learning systems.
While efficient solvers exist for even high-dimensional LPs, explaining their
solutions has not received much attention yet, as explainable artificial intelligence
(XAI) has mostly focused on deep learning models. LPs are mostly considered
white-box and thus assumed simple to explain, but we argue that they are not
easy to understand in terms of the relationships between inputs and outputs. To
mitigate this lack of explainability, we show how to adapt attribution
methods by encoding LPs in a neural fashion. The encoding functions consider
aspects such as the feasibility of the decision space, the cost attached to
each input, and the distance to special points of interest. Using a variety of
LPs, including a very large-scale LP with 10k dimensions, we demonstrate the
usefulness of explanation methods using our neural LP encodings, although the
attribution methods Saliency and LIME are indistinguishable for low perturbation
levels. In essence, we demonstrate that LPs can and should be explained, which
can be achieved by representing an LP as a neural network.
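To make the idea concrete, here is a minimal, hedged sketch of one plausible encoding in the spirit the abstract describes: an LP's objective plus a ReLU penalty on constraint violations gives a scalar function of the input, to which a gradient-style saliency attribution can be applied. All names (`encode_lp`, `saliency`, the penalty weight `lam`) and the finite-difference gradient are illustrative assumptions, not the paper's actual construction.

```python
def encode_lp(x, c, A, b, lam=10.0):
    """Illustrative scalar encoding of an LP min c.x s.t. A x <= b:
    objective cost plus a ReLU penalty on constraint violations,
    capturing both the cost of the input and its (in)feasibility.
    (Assumed form, not the paper's exact encoding.)"""
    cost = sum(ci * xi for ci, xi in zip(c, x))
    penalty = 0.0
    for row, bi in zip(A, b):
        violation = sum(aij * xj for aij, xj in zip(row, x)) - bi
        penalty += max(violation, 0.0)  # ReLU on each constraint
    return cost + lam * penalty

def saliency(x, c, A, b, eps=1e-5):
    """Central finite differences as a stand-in for gradient-based
    saliency: sensitivity of the encoded score to each input."""
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append((encode_lp(hi, c, A, b) - encode_lp(lo, c, A, b)) / (2 * eps))
    return grads

# Toy LP: minimize x0 + 2*x1 subject to x0 + x1 <= 1.
c = [1.0, 2.0]
A = [[1.0, 1.0]]
b = [1.0]
# At a strictly feasible point the penalty is inactive, so the
# attribution reduces to the cost vector c itself.
print(saliency([0.2, 0.3], c, A, b))
```

At an infeasible point the penalty term would dominate, so the attribution would instead reflect the violated constraints, which is how such an encoding can separate cost-driven from feasibility-driven explanations.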
