We address the problem of unsupervised disentanglement of discrete and continuous explanatory factors of data. We first show a simple procedure for minimizing the total correlation of the continuous latent variables without having to use a discriminator network or perform importance sampling, via cascading the information flow in the β-VAE framework. Furthermore, we propose a method which avoids offloading the entire burden of jointly modeling the continuous and discrete factors to the variational encoder by employing a separate discrete inference procedure. This leads to an interesting alternating minimization problem which switches between finding the most...
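For orientation, a minimal sketch of the standard β-VAE objective that this abstract builds on is given below; it shows only the reconstruction term plus a β-weighted KL penalty, not the paper's cascading information flow, total-correlation minimization, or separate discrete inference procedure. The PyTorch framing, function name, and Bernoulli likelihood are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the beta-VAE objective (Higgins et al.); the cascading /
# discrete-inference machinery described in the abstract is NOT reproduced here.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_logits, mu, logvar, beta=4.0):
    """Per-example reconstruction term plus beta * KL(q(z|x) || N(0, I))."""
    # Bernoulli reconstruction negative log-likelihood, summed over dimensions.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    # Analytic KL divergence for a diagonal-Gaussian posterior vs. a standard normal prior.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return (recon + beta * kl) / x.shape[0]
```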
We present a self-supervised method to disentangle factors of variation in high-dimensional data tha...
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic en...
Variational autoencoders (VAEs) have recently been used for unsupervised disentanglement learning of...
Real-world data typically include discrete generative factors, such as category labels and the exist...
Unsupervised disentangled representation learning is one of the foundational methods to learn interp...
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner...
A large part of the literature on learning disentangled representations focuses on variational autoe...
In recent years, the rapid development of deep learning approaches has paved t...
Disentangled sequential autoencoders (DSAEs) represent a class of probabilistic graphical models tha...
Disentanglement is at the forefront of unsupervised learning, as disentangled representations of dat...
Two recent works have shown the benefit of modeling both high-level factors an...
The idea behind the unsupervised learning of disentangled representations is that real-world data is...
In many scenarios it is natural to assume that a set of data is generated given a set of latent fact...
We would like to learn a representation of the data that reflects the semantics behind a specific gr...
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) dat...