Maximum entropy PDF projection (MEPP) is a method for constructing generative models from feature transformations. Corresponding to each dimension-reducing feature mapping, such as a feed-forward neural network or an algorithm that computes linear-prediction coefficients from time series, and given a prior distribution for the features, there exists a unique generative model for the input data which, subject to mild requirements, is maximum entropy (MaxEnt) among all probability density functions (PDFs) consistent with the given feature prior. In this paper, we consider the problem of sampling from these MaxEnt-projected PDFs. The sampling process consists of drawing a sample from the given feature prior distribution, then drawing samples ...
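The two-stage sampling procedure described above lends itself to a short illustration. The sketch below is a toy example, not the paper's algorithm: it assumes (in line with the level-set sampling work listed among the related abstracts below) that the second stage draws inputs uniformly on the level set of the feature map, and it uses the hypothetical feature T(x) = ||x||^2, whose level sets are spheres so that uniform sampling is available in closed form; the gamma feature prior is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)

def feature(x):
    # Hypothetical dimension-reducing feature map T: R^n -> R.
    return np.sum(x**2, axis=-1)

def sample_maxent_projection(n_samples, dim, draw_feature_prior):
    # Stage 1: draw feature values z from the given feature prior.
    z = draw_feature_prior(n_samples)                 # shape (n_samples,)
    # Stage 2: draw x uniformly on the level set {x : T(x) = z},
    # here the sphere of radius sqrt(z) (closed form for this toy T).
    g = rng.standard_normal((n_samples, dim))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the unit sphere
    return np.sqrt(z)[:, None] * x

# Usage: an arbitrary gamma prior on the feature, 10-dimensional inputs.
samples = sample_maxent_projection(
    5, dim=10, draw_feature_prior=lambda n: rng.gamma(5.0, 2.0, size=n))
print(feature(samples))  # the features of the samples follow the chosen prior

For general feature maps the level sets are not spheres, and the second stage would instead require an MCMC or similar near-uniform level-set sampler.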
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high...
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribu...
We propose a framework for learning hidden-variable models by optimizing entropies, in which entropy...
We review recent theoretical results in maximum entropy (MaxEnt) PDF projection that provide a theor...
Maximum entropy models provide the least constrained probability distributions...
With the possibility of interpreting data using increasingly complex models, w...
Nowadays, one of the most challenging points in statistics is the analysis of high dimensiona...
In a recent paper, the authors proposed a general methodology for probabilisti...
The success of evolutionary algorithms, in particular Factorized Distribution Algorithms (FDA), for ...
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sam...
We describe a new methodology for constructing probability measures from obser...
In this thesis, we present an MCMC-based method to extract near-uniform samples from a level set of ...
The maximum entropy principle (MEP) is one of the most prominent methods to investigate and model co...
We introduce a new perspective on spectral dimensionality reduction which views these methods as Gau...