We present a self-supervised method to disentangle factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions on the number or distribution of the individual latent variables to be extracted). In this method, which we call NashAE, high-dimensional feature disentanglement is accomplished in the low-dimensional latent space of a standard autoencoder (AE) by promoting the discrepancy between each encoding element and information about that element recovered from all other encoding elements. Disentanglement is promoted efficiently by framing this as a minmax game between the AE and an ensemble of regression networks, each of which provides an estimate of an element conditi...
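The abstract above describes a minmax game: an ensemble of regressors tries to predict each latent element from all the others, while the AE is trained to make those predictions fail (alongside reconstructing the input). Below is a minimal NumPy sketch of the two competing objectives under the assumption of mean-squared losses; the function names `mask_latents` and `nashae_losses` are illustrative, not from the paper.

```python
import numpy as np

def mask_latents(z):
    """For each latent element i, build the input its regressor sees:
    a copy of the latent batch z with column i zeroed out (z_{-i})."""
    n, d = z.shape
    masked = np.repeat(z[None, :, :], d, axis=0)  # shape (d, n, d)
    for i in range(d):
        masked[i, :, i] = 0.0                     # hide element i from regressor i
    return masked

def nashae_losses(x, x_hat, z, z_hat):
    """Sketch of the two sides of the minmax game.

    x, x_hat : input batch and AE reconstruction
    z        : latent codes, shape (n, d)
    z_hat    : z_hat[:, i] is regressor i's estimate of z[:, i] from z_{-i}

    The regressors minimize pred_loss; the AE minimizes reconstruction
    error while *maximizing* the regressors' error (the discrepancy term).
    """
    recon_loss = np.mean((x - x_hat) ** 2)  # AE reconstruction term
    pred_loss = np.mean((z_hat - z) ** 2)   # regressors' objective
    ae_loss = recon_loss - pred_loss        # adversarial AE objective
    return ae_loss, pred_loss
```

In a training loop, each step would alternate (or balance) gradient updates: the regressors descend on `pred_loss`, the AE descends on `ae_loss`, driving each latent element to carry information the other elements cannot recover.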
The idea behind the unsupervised learning of disentangled representations is that real-world data is...
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to th...
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner...
Representation disentanglement is an important goal of representation learning that benefits various...
A central problem in unsupervised deep learning is how to find useful representations of high-dimens...
In many scenarios it is natural to assume that a set of data is generated given a set of latent fact...
When working with textual data, a natural application of disentangled representations is fair classi...
A large part of the literature on learning disentangled representations focuses on variational autoe...
Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent s...
In recent years, the rapid development of deep learning approaches has paved t...
Representation learning, the task of extracting meaningful representations of high-dimensional data,...
We address the problem of unsupervised disentanglement of latent representations learnt via deep gen...
Disentangled representation learning has undoubtedly benefited from objective function surgery. Howe...
Thesis (Master's), University of Washington, 2022. In this thesis, we conduct a thorough study of "Var...
Disentanglement is a useful property in representation learning which increases the interpretability...