

Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement

Xiaoting Shao; Karl Stelzner; Kristian Kersting
In: Computing Research Repository (CoRR), Vol. abs/2202.00391, arXiv, 2022.


A key assumption of most statistical machine learning methods is that they have access to independent samples from the distribution of data they will encounter at test time. As such, these methods often perform poorly in the face of biased data, which breaks this assumption. In particular, machine learning models have been shown to exhibit Clever-Hans-like behaviour, meaning that spurious correlations in the training set are inadvertently learnt. A number of methods have been proposed for revising deep classifiers so that they learn the right correlations. However, generative models have been overlooked so far. We observe that generative models are also prone to Clever-Hans-like behaviour. To counteract this issue, we propose to debias generative models by disentangling their internal representations, which is achieved via human feedback. Our experiments show that this is effective at removing bias even when human feedback covers only a small fraction of the desired distribution. In addition, we achieve strong disentanglement results in a quantitative comparison with recent methods.
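To illustrate the general idea of using sparse human feedback to tie latent dimensions to known factors, here is a minimal NumPy sketch. This is a hypothetical toy objective, not the paper's actual loss: it combines a reconstruction term with a supervision term that penalizes mismatch between designated latent dimensions and human-provided factor labels, applied only on the (possibly small) labeled subset.

```python
import numpy as np

def debias_loss(x, x_hat, z, labels, labeled_mask, lam=10.0):
    """Toy debiasing objective (illustrative sketch, not the paper's method).

    x            : (N, D) inputs
    x_hat        : (N, D) reconstructions from a generative model
    z            : (N, K) inferred latent codes
    labels       : (N, K) human-provided factor annotations
    labeled_mask : (N,) bool, True where feedback exists (can be sparse)
    lam          : weight of the supervision term
    """
    # Standard reconstruction term over all samples.
    recon = np.mean((x - x_hat) ** 2)
    # Supervision term only where human feedback is available:
    # pushes each labeled latent toward its annotated factor value.
    if labeled_mask.any():
        sup = np.mean((z[labeled_mask] - labels[labeled_mask]) ** 2)
    else:
        sup = 0.0
    return recon + lam * sup
```

Even when `labeled_mask` selects only a handful of samples, the supervision term anchors the meaning of the chosen latent dimensions, which is the mechanism the abstract alludes to when it says feedback covering a small fraction of the distribution suffices.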
