Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called the self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The ...
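To make the idea concrete, below is a minimal sketch of a VAE whose decoder is additionally conditioned on a deterministic, self-supervised transformation of the input, here assumed to be 2x average-pool downscaling. The MLP encoder/decoder, the layer sizes, and the name DownscaleConditionedVAE are illustrative assumptions for this sketch, not the selfVAE architecture described in the abstract above.

```python
# Hedged sketch: a VAE whose reconstruction is conditioned on a deterministic
# downscaled view of the input, loosely illustrating the use of a
# self-supervised transformation as an auxiliary variable.
# All architectural choices here are assumptions, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownscaleConditionedVAE(nn.Module):
    def __init__(self, img_size=28, z_dim=16):
        super().__init__()
        d = img_size * img_size              # flattened full-resolution size
        d_small = (img_size // 2) ** 2       # flattened size after 2x downscaling
        # Encoder q(z | x): outputs mean and log-variance of the latent code.
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))
        # Decoder p(x | y, z): reconstructs x from z and the downscaled view y.
        self.dec = nn.Sequential(nn.Linear(z_dim + d_small, 256), nn.ReLU(),
                                 nn.Linear(256, d))

    def downscale(self, x):
        # Deterministic transformation y = f(x): 2x average-pool downscaling.
        return F.avg_pool2d(x, kernel_size=2)

    def forward(self, x):
        b = x.size(0)
        y = self.downscale(x).reshape(b, -1)
        mu, logvar = self.enc(x.reshape(b, -1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logits = self.dec(torch.cat([z, y], dim=-1))
        # Negative ELBO: Bernoulli reconstruction term + analytic KL(q(z|x) || N(0, I)).
        rec = F.binary_cross_entropy_with_logits(
            logits, x.reshape(b, -1), reduction="sum") / b
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / b
        return rec + kl

# Usage: one forward/backward pass on a dummy batch of 28x28 images.
model = DownscaleConditionedVAE()
x = torch.rand(8, 1, 28, 28)
loss = model(x)
loss.backward()
```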
We propose a new structure for the variational auto-encoder (VAE) prior, with the weakly informati...
Variational autoencoders (VAEs) are a popular class of deep generative models with many variants and...
Generative Adversarial Networks (GANs) are trained to generate images from random noise vectors, but o...
Autoencoders are self-supervised learning systems where, during training, the output is an approxim...
In end-to-end optimized learned image compression, it is standard practice to use a convolutional va...
Variational auto-encoders (VAEs) are a powerful approach to unsupervised learning. They enable scala...
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular ...
We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distrib...
In this article we introduce the notion of Split Variational Autoencoder (SVAE), whose output x̂ is ...
We use the variational auto-encoder (VAE) to transform the set of finite automata into a continuous ...
In the past few years, generative models have become an interesting topic in the field of Machine Lea...
Thesis (Master's), University of Washington, 2022. In this thesis, we conduct a thorough study of "Var...
We propose a multi-layer variational autoencoder method, which we call HR-VQVAE, that learns hierarchical ...
Deep generative models allow us to learn hidden representations of data and generate new examples. T...