On Lifted PageRank, Kalman Filter and Towards Lifted Linear Program Solving

Babak Ahmadi; Martin Mladenov; Kristian Kersting; Scott Sanner
In: Myra Spiliopoulou; Andreas Nürnberger; René Schult (Eds.). Report of the symposium "Lernen, Wissen, Adaptivität 2011" of the GI special interest groups KDML, IR and WM. GI-Workshop-Tage "Lernen, Wissen, Adaptivität" (LWA-2011), September 28-30, pages 35-42, Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg, 2011.


Lifted message passing algorithms exploit repeated structure within a given graphical model to answer queries efficiently. Given evidence, they construct a lifted network of supernodes and superpotentials corresponding to sets of nodes and potentials that are indistinguishable given the evidence. Recently, efficient algorithms were presented for updating the structure of an existing lifted network under incremental changes to the evidence. In the inference stage, however, current algorithms must construct a separate lifted network for each evidence case and run a modified message passing algorithm on each lifted network separately. Consequently, symmetries across the inference tasks are not exploited. In this paper, we present a novel lifted message passing technique that exploits symmetries across multiple evidence cases. The benefits of this multi-evidence lifted inference are demonstrated on several important AI tasks, such as solving linear programs and computing personalized PageRanks and Kalman filter estimates via multi-evidence lifted Gaussian belief propagation.