

Learning Markov Logic Networks via Functional Gradient Boosting

Tushar Khot; Sriraam Natarajan; Kristian Kersting; Jude W. Shavlik
In: Diane J. Cook; Jian Pei; Wei Wang; Osmar R. Zaïane; Xindong Wu (Eds.). 11th IEEE International Conference on Data Mining. IEEE International Conference on Data Mining (ICDM-2011), December 11-14, Vancouver, BC, Canada, Pages 320-329, IEEE Computer Society, 2011.


Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost. Learning MLNs is a hard problem and has therefore attracted much interest in the SRL community. Current methods for learning MLNs follow a two-step approach: first, perform a search through the space of possible clauses, and then learn appropriate weights for these clauses. We propose to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting, where the problem of learning MLNs is turned into a series of relational functional approximation problems. We use two kinds of representations for the gradients: clause-based and tree-based. Our experimental evaluation on several benchmark data sets demonstrates that our new approach can learn MLNs that are as good as or better than those found by state-of-the-art methods, but often in a fraction of the time.
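The core loop the abstract describes can be illustrated with a minimal sketch of functional gradient boosting for probability estimation. This is a hedged, hypothetical propositional stand-in, not the paper's implementation: the paper regresses the functional gradient with relational regression trees or clauses over first-order features, whereas here a one-dimensional regression stump plays that role. All function names and the toy data are invented for illustration.

```python
import math

def sigmoid(psi):
    # P(y = 1 | x) under the current potential function psi(x)
    return 1.0 / (1.0 + math.exp(-psi))

def fit_stump(xs, grads):
    """Fit a one-split regression stump to the pointwise gradients
    (the stand-in for the paper's relational regression trees)."""
    best = None
    for t in sorted(set(xs)):
        left = [g for x, g in zip(xs, grads) if x <= t]
        right = [g for x, g in zip(xs, grads) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((g - (lv if x <= t else rv)) ** 2
                  for x, g in zip(xs, grads))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x, t=t, lv=lv, rv=rv: (lv if x <= t else rv)

def boost(xs, ys, rounds=10):
    """psi_m = psi_{m-1} + Delta_m, where each Delta_m regresses the
    functional gradient I[y = 1] - P(y = 1 | x) at every example."""
    stumps = []
    for _ in range(rounds):
        psi = [sum(s(x) for s in stumps) for x in xs]
        grads = [y - sigmoid(p) for y, p in zip(ys, psi)]  # d logLik / d psi
        stumps.append(fit_stump(xs, grads))
    return lambda x: sigmoid(sum(s(x) for s in stumps))

# Toy data: positives cluster at larger x
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
model = boost(xs, ys)
```

The same loop structure carries over to the relational setting: only the regression step changes, from a propositional stump to a clause-based or tree-based relational approximation of the gradient.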
