
Publication

Sparse Group Restricted Boltzmann Machines

Heng Luo; Ruimin Shen; Changyong Niu; Carsten Ullrich
In: W. Burgard; D. Roth (Eds.). Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence (AAAI), August 7-11, 2011, San Francisco, California, USA, AAAI Press, 2011.

Abstract

Since learning in Boltzmann machines is typically quite slow, there is a need to restrict connections within hidden layers. However, the resulting states of hidden units exhibit statistical dependencies. Based on this observation, we propose using l1/l2 regularization on the activation probabilities of hidden units in restricted Boltzmann machines (RBMs) to capture the local dependencies among hidden units. This regularization not only encourages the hidden units of many groups to be inactive given observed data but also makes hidden units within a group compete with each other for modeling observed data. Thus, l1/l2 regularization on RBMs yields sparsity at both the group and the hidden-unit levels. We call RBMs trained with this regularizer sparse group RBMs (SGRBMs). We apply SGRBMs to model patches of natural images, handwritten digits, and OCR English letters. To demonstrate that SGRBMs learn more discriminative features, we then apply them to pretrain deep networks for classification tasks. Furthermore, we illustrate that the regularizer can also be applied to deep Boltzmann machines, yielding sparse group deep Boltzmann machines. When applied to the MNIST data set, a two-layer sparse group deep Boltzmann machine achieves an error rate of 0.84%, which is, to our knowledge, the best published result on the permutation-invariant version of the MNIST task.
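To make the regularizer concrete, below is a minimal NumPy sketch of one contrastive-divergence (CD-1) update with an l1/l2 group penalty on the hidden activation probabilities, in the spirit of the abstract. This is an illustrative reconstruction, not the authors' implementation: the function name cd1_sparse_group_update, the equal non-overlapping group layout, and the hyperparameters lam and lr are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed shapes: n_vis visible units, n_hid hidden units split into
# equal, non-overlapping groups of size group_size.
n_vis, n_hid, group_size = 784, 500, 10
lam = 0.1   # regularization strength (assumed value)
lr = 0.01   # learning rate (assumed value)

W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_hid)   # hidden biases
c = np.zeros(n_vis)   # visible biases

def cd1_sparse_group_update(v0):
    """One CD-1 step with an l1/l2 (group-sparsity) penalty on the
    hidden activation probabilities (hypothetical helper)."""
    # Positive phase
    ph0 = sigmoid(v0 @ W + b)                      # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase (one Gibbs step)
    pv1 = sigmoid(h0 @ W.T + c)
    ph1 = sigmoid(pv1 @ W + b)

    # l1/l2 penalty: sum over groups of the l2 norm of each group's
    # activation probabilities. Its gradient w.r.t. p_j is p_j / ||p_g||.
    groups = ph0.reshape(ph0.shape[0], -1, group_size)
    norms = np.sqrt((groups ** 2).sum(axis=2, keepdims=True)) + 1e-8
    dpen_dp = (groups / norms).reshape(ph0.shape)
    dpen_dpre = dpen_dp * ph0 * (1.0 - ph0)        # chain rule through sigmoid

    # Gradient ascent on the CD objective minus the penalty
    dW = (v0.T @ ph0 - pv1.T @ ph1 - lam * v0.T @ dpen_dpre) / v0.shape[0]
    db = (ph0 - ph1 - lam * dpen_dpre).mean(axis=0)
    dc = (v0 - pv1).mean(axis=0)
    return dW, db, dc

# Usage on a random binary mini-batch standing in for image patches
v_batch = (rng.random((32, n_vis)) < 0.5).astype(float)
dW, db, dc = cd1_sparse_group_update(v_batch)
W += lr * dW; b += lr * db; c += lr * dc
```

Because the penalty gradient on each unit is p_j / ||p_g||, units in weakly activated groups are suppressed most strongly, which drives whole groups toward inactivity, while within a strongly activated group the most active units dominate the gradient and so compete with each other. This reflects the two levels of sparsity the abstract describes.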