Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularisation component of the loss. In this work, we highlight the interaction between data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We note that standardised benchmark datasets are constructed in ways that are conducive to learning what appear to be disentangled representations. We design an intuitive adversarial dataset that exploits this mechanism to break existing state-of-the-art disentanglement frameworks. Finally, we supply a solution that enables disentanglement by modifying the reconstruction loss, affecting how VAEs perceive distances between data points.
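The abstract above contrasts the two components of the VAE objective: the reconstruction term and the KL regulariser. The paper's own modified reconstruction loss is not given here, so the sketch below only illustrates the standard (β-)VAE decomposition it starts from, using a diagonal-Gaussian encoder and a squared-error reconstruction term; the function name and toy values are illustrative, not from the paper.

```python
import numpy as np

def vae_loss_terms(x, x_recon, mu, logvar, beta=1.0):
    """Per-sample beta-VAE objective: reconstruction term plus a
    beta-weighted KL regulariser between the diagonal-Gaussian
    posterior q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I)."""
    # Reconstruction term: squared error (Gaussian likelihood up to
    # constants). This is the term the abstract argues drives
    # disentanglement through the distances it induces between data points.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
    return recon + beta * kl, recon, kl

# Toy usage: one sample, three observed dims, two latent dims.
x = np.array([[0.5, -0.2, 0.1]])
x_recon = np.array([[0.4, -0.1, 0.0]])
mu = np.array([[0.0, 0.0]])       # posterior mean equal to the prior mean
logvar = np.array([[0.0, 0.0]])   # unit posterior variance, so KL = 0
total, recon, kl = vae_loss_terms(x, x_recon, mu, logvar, beta=4.0)
```

With the posterior matching the prior exactly, the KL term vanishes and the objective reduces to the reconstruction term alone, which is the regime the abstract's argument focuses on.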
The ability of Variational Autoencoders to learn disentangled representations has made them appealin...
We propose TopDis (Topological Disentanglement), a method for learning disentangled representations ...
Obtaining disentangled representations is a goal sought after to make A.I. models more interpretable...
Thesis (Master's)--University of Washington, 2022. In this thesis, we conduct a thorough study of "Var...
A large part of the literature on learning disentangled representations focuses on variational autoe...
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of t...
Variational autoencoders (VAEs) have recently been used for unsupervised disentanglement learning of...
We present a self-supervised method to disentangle factors of variation in high-dimensional data tha...
Variational AutoEncoders (VAEs) provide a means to generate representational latent embeddings. Pre...
Disentangled representation learning has undoubtedly benefited from objective function surgery. Howe...
Representation learning, the task of extracting meaningful representations of high-dimensional data,...
Disentanglement is the task of learning representations that identify and separate factors that expl...
Two recent works have shown the benefit of modeling both high-level factors an...