Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and, as a result, are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control the relative degree of statistical independence between blocks of variables and between individual variables within blocks. We derive this objective as a generalization of the evidence lower bound, which allows us to explicitly represent the trade-offs betwe...
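For reference, the single-level objective that such modifications start from is the evidence lower bound (ELBO), and the trade-offs it hides are commonly exposed by decomposing its KL term. The sketch below shows the standard ELBO together with one widely used decomposition (the index-code mutual information / total correlation / dimension-wise KL split of Chen et al., 2018); it is background notation, not the specific two-level form proposed above:

\mathcal{L}(\theta,\phi) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)

\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)\right] = I_q(x; z) + \mathrm{KL}\!\Big(q_\phi(z) \,\Big\|\, \prod_d q_\phi(z_d)\Big) + \sum_d \mathrm{KL}\!\left(q_\phi(z_d) \,\|\, p(z_d)\right)

Weighting the total correlation term separately at the block level and within blocks is one way such hierarchical control over statistical independence could be realized.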
Real-world data typically include discrete generative factors, such as category labels and the exist...
Disentangled representation learning has undoubtedly benefited from objective function surgery. Howe...
We address the problem of unsupervised disentanglement of latent representations learnt via deep gen...
Unsupervised disentangled representation learning is one of the foundational approaches to learning interp...
The idea behind the unsupervised learning of disentangled representations is that real-world data is...
Learning reliable and interpretable representations is one of the fundamental challenges in machine ...
Real-world data is not random: the variability in the datasets that arise in computer vision, sign...
We would like to learn a representation of the data that reflects the semantics behind a specific gr...
A large part of the literature on learning disentangled representations focuses on variational autoe...
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of t...
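As a concrete instance of such a decomposition-based modification, the β-VAE objective (Higgins et al., 2017) simply reweights the KL term of the ELBO; it is shown here as background, not as the generalisation developed in that work:

\mathcal{L}_\beta(\theta,\phi) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \beta\, \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right), \qquad \beta \geq 1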
We present a self-supervised method to disentangle factors of variation in high-dimensional data tha...
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic en...
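To make the joint training of a probabilistic encoder and decoder concrete, here is a minimal sketch of a Gaussian VAE in PyTorch. It is an illustration of the standard setup rather than any specific model from the abstracts above; it assumes inputs scaled to [0, 1] with a Bernoulli decoder, and the layer sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal Gaussian VAE: encoder q(z|x) and decoder p(x|z) trained jointly."""
    def __init__(self, x_dim=784, z_dim=10, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, h_dim)
        self.out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        logits = self.out(F.relu(self.dec(z)))
        return logits, mu, logvar

def elbo_loss(x, logits, mu, logvar):
    # Reconstruction term E_q[log p(x|z)]: Bernoulli likelihood on the inputs.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # negative ELBO, minimized jointly over encoder and decoder

A training step then just minimizes elbo_loss over minibatches with any optimizer; the disentanglement objectives surveyed above modify only the KL term of this loss.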
In many scenarios it is natural to assume that a set of data is generated given a set of latent fact...
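In symbols, this assumption is the standard latent-variable factorisation (shown for reference):

z \sim p(z), \qquad x \sim p_\theta(x \mid z), \qquad p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, \mathrm{d}z

The marginal likelihood p_\theta(x) is intractable in general, which is what motivates the variational posterior q_\phi(z \mid x) used throughout the methods above.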
Representation learners that disentangle factors of variation have already proven to be important in...