Efficient approximation lies at the heart of large-scale machine learning problems. In this paper, we propose a novel, robust maximum entropy algorithm, which is capable of dealing with hundreds of moments and allows for computationally efficient approximations. We showcase the usefulness of the proposed method, demonstrate its equivalence to constrained Bayesian variational inference, and show its superiority over existing approaches in two applications, namely, fast log-determinant estimation and information-theoretic Bayesian optimisation.
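To make the log-determinant application concrete, the sketch below illustrates the general idea behind moment-based maximum entropy spectral estimation: stochastic (Hutchinson) trace probes supply the first few moments of a matrix's eigenvalue spectrum, a maximum entropy density is fitted to those moments by minimising the convex dual over Lagrange multipliers, and the log-determinant follows from log det(A) = n ∫ log(λ) p(λ) dλ. This is a minimal illustrative sketch, not the paper's actual algorithm; all function names (`hutchinson_moments`, `maxent_density`), the probe count, the moment order, and the unit-interval spectrum assumption are choices made here for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: MaxEnt log-determinant estimation for a symmetric PSD
# matrix A whose eigenvalues lie in (0, 1] (e.g. after normalising A by an
# upper bound on its spectral norm). Names and parameters are hypothetical.

rng = np.random.default_rng(0)

def hutchinson_moments(A, order, n_probes=30):
    """Estimate mu_k = tr(A^k) / n for k = 1..order via Hutchinson probes."""
    n = A.shape[0]
    mu = np.zeros(order)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        v = z.copy()
        for k in range(order):
            v = A @ v                          # v = A^{k+1} z
            mu[k] += (z @ v) / (n * n_probes)  # E[z^T A^k z] = tr(A^k)
    return mu

def maxent_density(moments, grid):
    """Fit p(x) = exp(-1 - sum_k a_k x^k) on `grid` by matching `moments`.

    Minimises the MaxEnt dual, a smooth convex function of the Lagrange
    multipliers a; its gradient vanishes when model moments match `moments`.
    """
    order = len(moments)
    powers = np.vstack([grid**k for k in range(order + 1)])  # includes x^0
    m = np.concatenate(([1.0], moments))       # x^0 row enforces total mass 1

    def dual(a):
        p = np.exp(-1.0 - a @ powers)
        return np.trapz(p, grid) + a @ m

    res = minimize(dual, np.zeros(order + 1), method="BFGS")
    return np.exp(-1.0 - res.x @ powers)

# Toy PSD matrix with known spectrum in (0, 1] for checking the estimate
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = rng.uniform(0.05, 1.0, size=n)
A = Q @ np.diag(eigs) @ Q.T

grid = np.linspace(1e-3, 1.0, 2000)
p = maxent_density(hutchinson_moments(A, order=8), grid)
logdet_est = n * np.trapz(np.log(grid) * p, grid)
print(logdet_est, np.sum(np.log(eigs)))  # MaxEnt estimate vs exact value
```

The appeal of this formulation is that the dual is convex in the multipliers, so a generic quasi-Newton solver suffices for a handful of moments; scaling it to hundreds of moments, which is where the monomial basis becomes badly conditioned, is precisely the regime the paper's robust algorithm targets.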