Publication

Compression Versus Accuracy: A Hierarchy of Lifted Models

Jan Speller; Malte Luttermann; Marcel Gehrke; Tanya Braun
In: Inês Lynce; Nello Murano; Mauro Vallati; Serena Villata; Federico Chesani; Michela Milano; Andrea Omicini; Mehdi Dastani (Eds.). Proceedings of the Twenty-Eighth European Conference on Artificial Intelligence. European Conference on Artificial Intelligence (ECAI-2025), October 25-30, 2025, Bologna, Italy, Pages 5051-5058, Vol. 413, IOS Press, 10/2025.

Abstract

Probabilistic graphical models that encode indistinguishable objects and relations among them use first-order logic constructs to compress a propositional factorised model for more efficient (lifted) inference. To obtain a lifted representation, the state-of-the-art algorithm Advanced Colour Passing (ACP) groups factors that represent matching distributions. In an approximate version using ε as a hyperparameter, factors are grouped that differ by a factor of at most (1 ± ε). However, finding a suitable ε is not obvious and may require extensive exploration, possibly entailing many ACP runs with different ε values. Additionally, varying ε can yield substantially different models, reducing interpretability. Therefore, this paper presents a hierarchical, hyperparameter-free approach to lifted model construction. It efficiently computes a hierarchy of ε values that ensures a hierarchy of models, meaning that once factors are grouped together for some ε, these factors remain grouped together for all larger ε as well. The hierarchy of ε values also yields a hierarchy of error bounds. This allows for explicitly weighing compression against accuracy when choosing specific ε values to run ACP with and supports interpretability across the different models.
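The nesting property described above can be illustrated with a small sketch. This is not the paper's algorithm: it assumes factors are plain potential tables, uses an illustrative (1 ± ε)-style dissimilarity, and derives a hierarchy of ε values via single-linkage agglomerative merging, under which groups formed at a smaller ε stay together at every larger ε.

```python
# Illustrative sketch only (not ACP itself): grouping toy factor
# potential tables by a (1 +/- eps)-style criterion and extracting a
# hierarchy of eps values. The dissimilarity and the factor tables
# below are assumptions made for this demo.

def dissimilarity(phi1, phi2):
    """Smallest eps such that all entry ratios lie within 1 +/- eps."""
    return max(max(a / b, b / a) - 1 for a, b in zip(phi1, phi2))

def epsilon_hierarchy(factors):
    """Return a list of (eps, grouping) levels via single linkage.

    Single-linkage merging guarantees the nesting property: once two
    factors share a group at some eps, they share one at every larger eps.
    """
    n = len(factors)
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process factor pairs in order of increasing dissimilarity.
    pairs = sorted(
        (dissimilarity(factors[i], factors[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    levels = []
    for eps, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            groups = {}
            for k in range(n):
                groups.setdefault(find(k), []).append(k)
            levels.append((eps, sorted(groups.values())))
    return levels

# Three toy potential tables: f0 and f1 are close, f2 is farther away.
factors = [[1.0, 2.0, 3.0], [1.05, 2.1, 3.1], [2.0, 4.0, 6.0]]
for eps, groups in epsilon_hierarchy(factors):
    print(f"eps = {eps:.3f}: {groups}")
```

In this toy run, f0 and f1 merge at a small ε while f2 joins only at a much larger one, giving the kind of ε hierarchy from which one could explicitly trade compression (fewer groups) against accuracy (tighter ε).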