Fitting probabilistic models to data is often difficult, due to the general intractability of the partition function and its derivatives. In this dissertation we propose a new parameter estimation technique that does not require computing an intractable normalization factor or sampling from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the KL divergence between the data distribution and the distribution produced by running the dynamics for an infinitesimal time. Score matching, minimum velocity learning, and certain forms of contrastive divergence are shown to be specia...
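The objective described above can be given a rough numerical sketch. Assuming a small Ising-style energy-based model with single-bit-flip connectivity between states (a common choice for such flow-based objectives), the loss sums `exp((E(x) - E(y))/2)` over observed states `x` and their one-flip neighbours `y`. All names here (`energy`, `mpf_objective`) are illustrative, not taken from the abstract:

```python
import numpy as np

def energy(x, J):
    """Ising-style energy E(x) = -0.5 * x^T J x for a spin vector x in {-1,+1}^d."""
    return -0.5 * x @ J @ x

def mpf_objective(data, J, eps=1.0):
    """Sketch of a minimum-probability-flow-style objective: for each observed
    state x, sum exp((E(x) - E(y)) / 2) over its one-bit-flip neighbours y.
    Minimizing this discourages probability from flowing out of the data."""
    total = 0.0
    for x in data:
        for i in range(len(x)):
            y = x.copy()
            y[i] = -y[i]  # flip one spin to get a neighbouring state
            total += np.exp((energy(x, J) - energy(y, J)) / 2.0)
    return eps * total / len(data)

rng = np.random.default_rng(0)
J = rng.normal(size=(4, 4))
J = (J + J.T) / 2.0        # symmetric couplings
np.fill_diagonal(J, 0.0)   # no self-coupling
data = rng.choice([-1, 1], size=(8, 4))
print(mpf_objective(data, J))
```

Note that the objective never touches the partition function: only energy differences between nearby states appear, which is what makes the estimator tractable.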
This paper analyses the Contrastive Divergence algorithm for learning statistical parameters. We rel...
Transferring information from data to models is crucial to many scientific disciplines. Typically, t...
In this paper, we discuss regularisation in online/sequential learning algorithms. In environments w...
High dimensional probabilistic models are used for many modern scientific and engineering data analy...
Energy-based models are popular in machine learning due to the elegance of their formulation and the...
Simulation-based inference enables learning the parameters of a model even when its likelihood canno...
When used to learn high dimensional parametric probabilistic models, the classical maximum likeliho...
Probabilistic graphical models provide a natural framework for the representation of complex systems...
The modeling of the score evolution by a single time-dependent neural network in Diffusion Probabili...
Maximum entropy models provide the least constrained probability distributions...
Probabilistic inference is at the core of many recent advances in machine learning. Unfortunately, ...
We study a normalizing flow in the latent space of a top-down generator model, in which the normaliz...