

MapReduce Lifting for Belief Propagation

Babak Ahmadi; Kristian Kersting; Sriraam Natarajan
In: Statistical Relational Artificial Intelligence, Papers from the 2013 AAAI Workshop. AAAI Conference on Artificial Intelligence (AAAI-2013), July 15, Bellevue, Washington, USA, AAAI Technical Report, Vol. WS-13-16, AAAI, 2013.


Judging by the increasing impact of machine learning on large-scale data analysis over the last decade, one can anticipate substantial growth in the diversity of machine learning applications for "big data" over the next decade. This exciting new opportunity, however, also raises many challenges. One of them is scaling inference in, and training of, graphical models. Typical ways to address this scaling issue are inference by approximate message passing, stochastic gradients, and MapReduce, among others. Often, we encounter inference and training problems with symmetries and redundancies in the graph structure. It has been shown that inference and training can indeed benefit from exploiting symmetries, for example by lifting loopy belief propagation (LBP). That is, a model is compressed by grouping together nodes that send and receive identical messages, so that a modified LBP running on the lifted graph yields the same marginals as LBP on the original one, but often in a fraction of the time. By establishing a link between lifting and radix sort, we show that lifting is MapReduce-able and thus combine two orthogonal approaches to scaling inference, namely exploiting symmetries and employing parallel computations.
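The symmetry-detection step behind lifted LBP is commonly done by color passing: nodes are repeatedly regrouped by their own color plus the multiset of their neighbors' colors, until a fixed point. Grouping identical signatures is a sorting problem, which is what connects lifting to radix sort and, in turn, to MapReduce (map: emit a (signature, node) pair; reduce: assign one color per signature). The following is a minimal, hedged sketch of this idea, not the paper's implementation; the graph representation and function names are illustrative assumptions.

```python
# Hypothetical sketch of lifting by color passing. Nodes that would send
# and receive identical LBP messages end up with the same final color
# (supernode). The sort over signatures is where a radix sort, or a
# MapReduce shuffle, could be substituted.

def lift_by_color_passing(adjacency, initial_colors):
    """Iterate color refinement to a fixed point.

    adjacency: dict node -> list of neighbor nodes
    initial_colors: dict node -> hashable label (e.g., evidence/potential)
    Returns dict node -> final color (supernode id).
    """
    colors = dict(initial_colors)
    while True:
        # "Map" step: each node's signature is its own color together
        # with the sorted multiset of its neighbors' colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in adjacency
        }
        # "Reduce" step: identical signatures collapse to one new color.
        new_ids = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: new_ids[signatures[v]] for v in adjacency}
        if new_colors == colors:
            return colors  # fixed point reached: these are the supernodes
        colors = new_colors

# Example: a 4-node cycle with uniform evidence is fully symmetric,
# so it lifts to a single supernode.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
groups = lift_by_color_passing(adj, {v: 0 for v in adj})
```

Modified LBP is then run on the compressed graph of supernodes, yielding the same marginals as LBP on the original graph.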
