Faster Attend-Infer-Repeat with Tractable Probabilistic Models
Karl Stelzner; Robert Peharz; Kristian Kersting
In: Kamalika Chaudhuri; Ruslan Salakhutdinov (Eds.). Proceedings of the 36th International Conference on Machine Learning (ICML 2019), June 9-15, Long Beach, California, USA, pages 5966-5975, Proceedings of Machine Learning Research, Vol. 97, PMLR, 2019.
The recent Attend-Infer-Repeat (AIR) framework marks a milestone in structured probabilistic modeling, as it tackles the challenging problem of unsupervised scene understanding via Bayesian inference. AIR expresses the composition of visual scenes from individual objects, and uses variational autoencoders to model the appearance of those objects. However, inference in the overall model is highly intractable, which hampers its learning speed and makes it prone to suboptimal solutions. In this paper, we show that the speed and robustness of learning in AIR can be considerably improved by replacing the intractable object representations with tractable probabilistic models. In particular, we opt for sum-product networks (SPNs), expressive deep probabilistic models with a rich set of tractable inference routines. The resulting model, called SuPAIR, learns an order of magnitude faster than AIR, treats object occlusions in a consistent manner, and allows for the inclusion of a background noise model, improving the robustness of Bayesian scene understanding.
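The abstract's key contrast is between the intractable posterior inference required by VAE-based object models and the exact, closed-form inference supported by sum-product networks. The following minimal sketch (not the paper's implementation; the structure and parameters are invented for illustration) shows why SPN inference is tractable: an SPN over two binary variables is just a weighted sum of products of leaf distributions, so the exact likelihood of any input is a single bottom-up pass, with no sampling or variational approximation.

```python
# Illustrative toy SPN (assumed structure, not from the paper): leaves are
# Bernoulli distributions, product nodes combine variables assumed
# independent within a mixture component, and the root sum node mixes
# the components with normalized weights.

def bernoulli(p, x):
    """Likelihood of a Bernoulli leaf with parameter p at observation x."""
    return p if x == 1 else 1.0 - p

def spn_likelihood(x1, x2):
    # Component 1: X1 ~ Bern(0.9), X2 ~ Bern(0.2)
    prod1 = bernoulli(0.9, x1) * bernoulli(0.2, x2)
    # Component 2: X1 ~ Bern(0.1), X2 ~ Bern(0.7)
    prod2 = bernoulli(0.1, x1) * bernoulli(0.7, x2)
    # Root sum node with mixture weights 0.6 and 0.4 (sum to 1)
    return 0.6 * prod1 + 0.4 * prod2

# Because sum weights are normalized and leaves are proper distributions,
# the model is a valid distribution: its mass over all states is exactly 1.
total = sum(spn_likelihood(a, b) for a in (0, 1) for b in (0, 1))
print(total)
```

Evaluating the likelihood of an image patch in SuPAIR works on the same principle, just with many more variables (pixels) and a deeper alternation of sum and product nodes; the cost of an exact likelihood remains linear in the network size.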