Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational-autoencoder-based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between the individual sources' posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learn...
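As a concrete illustration of the idea in this abstract, the divergence between two source-specific diagonal-Gaussian posterior approximations over a shared latent space can be computed in closed form. The sketch below is only an assumed minimal example (the function name and parameterisation are illustrative, not taken from the paper):

```python
import math

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """KL(N(mu1, diag(var1)) || N(mu2, diag(var2))) for diagonal Gaussians.

    Each argument is a list of per-dimension parameters, as two
    source-conditioned encoders might produce for the same latent space.
    """
    kl = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        # Standard closed-form KL term for one Gaussian dimension.
        kl += 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    return kl

# Identical posteriors agree perfectly, so the divergence is zero.
print(kl_diag_gaussians([0.0, 1.0], [1.0, 1.0], [0.0, 1.0], [1.0, 1.0]))  # → 0.0
```

A small divergence between the per-source posteriors indicates the sources carry redundant information about the latent variables; a large one indicates complementary (or conflicting) evidence.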
A deep latent variable model is a powerful tool for modelling complex distributions. However, in ord...
Learning flexible latent representation of observed data is an important precursor for most downstre...
Newly developed machine learning algorithms are heavily dependent on the choice of data representati...
This thesis introduces the Mutual Information Machine (MIM), an autoencoder model for learning j...
We propose a method for learning the dependency structure between latent variables in deep latent va...
Bayesian statistics is a powerful framework for modeling the world and reasoning over uncertainty. I...
Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult ...
A key advance in learning generative models is the use of amortized inference distributions that are...
Multi-Entity Dependence Learning (MEDL) explores conditional correlations among multiple entities. T...
We consider the problem of learning accurate models from multiple sources of nearby data. Given di...
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic en...
2018-08-14: Mutual information (MI) has been successfully applied to a wide variety of domains due to ...
Knowledge distillation has been used to capture the knowledge of a teacher model and distill it into...
In the past few years, generative models have become an interesting topic in the field of Machine Lea...