Relevance Factor VAE: Learning and Identifying Disentangled Factors

Minyoung Kim, Yuting Wang, Pritish Sahu, and Vladimir Pavlovic.


We propose a novel VAE-based deep autoencoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality. Our model, dubbed Relevance-Factor-VAE, leverages the total correlation (TC) in the latent space to achieve the disentanglement goal, but also addresses the key issue of existing approaches, which cannot distinguish between meaningful and nuisance factors of latent variation, often the source of considerable degradation in disentanglement performance. We tackle this issue by introducing so-called relevance indicator variables that can be automatically learned from data, together with the VAE parameters. Our model effectively focuses the TC loss onto the relevant factors only by tolerating large prior KL divergences, a desideratum justified by our semi-parametric theoretical analysis. Using a suite of disentanglement metrics, including a newly proposed one, as well as qualitative evidence, we demonstrate that our model outperforms existing methods across several challenging benchmark datasets.
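To make the idea of relevance gating concrete, here is a minimal, hypothetical sketch of a relevance-weighted per-dimension prior KL penalty. This is not the paper's actual objective (which penalizes the total correlation of the relevant latents); the function names and the simple per-dimension weighting are illustrative assumptions only. The point it shows is the asymmetry the abstract describes: dimensions flagged as relevant (r ≈ 1) are penalized toward the prior, while nuisance dimensions (r ≈ 0) are tolerated even with a large prior KL.

```python
import math

def gaussian_kl(mu, logvar):
    # Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) ), in closed form.
    return 0.5 * (mu ** 2 + math.exp(logvar) - logvar - 1.0)

def relevance_weighted_kl(mus, logvars, relevance):
    # Hypothetical illustration of relevance gating: each latent dimension's
    # prior-KL penalty is scaled by its relevance indicator r in [0, 1].
    # Relevant dims (r near 1) are pushed toward the prior; nuisance dims
    # (r near 0) may keep a large prior KL without being penalized.
    return sum(r * gaussian_kl(m, lv)
               for m, lv, r in zip(mus, logvars, relevance))
```

For example, with two latent dimensions that deviate equally from the prior, setting the relevance indicators to `[1.0, 0.0]` means only the first dimension contributes to the penalty; the second is treated as nuisance and tolerated.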

The full paper:



  • M. Kim, Y. Wang, P. Sahu, and V. Pavlovic, “Relevance Factor VAE: Learning and Identifying Disentangled Factors,” CoRR, vol. abs/1902.01568, 2019.
