Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value.
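The failure mode described in this abstract can be illustrated with a small sketch (the data and setup below are illustrative, not from the paper): when each input admits two equally likely correct targets, a least-squares fit converges to their average, which is not itself a valid target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-valued problem: for every input x, the target is either
# +1 or -1 with equal probability (two valid "branches" of an inverse).
x = rng.uniform(-1.0, 1.0, size=(1000, 1))
y = rng.choice([-1.0, 1.0], size=(1000, 1))

# Least-squares linear fit (bias + slope). Minimizing sum-of-squares
# drives the prediction toward the conditional average E[y | x].
X = np.hstack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w

# The fitted output sits near 0 everywhere: the average of the two
# correct targets, which is itself never a correct target.
print(float(np.abs(pred).max()))  # small, close to 0
```

A model of the full conditional density (such as the mixture-based approaches in the abstracts below) recovers both branches instead of collapsing them to their mean.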
Optimal transport (OT) provides effective tools for comparing and mapping prob...
Generative Adversarial Networks (GANs) have gained significant attention in recent years, with impre...
In this paper we introduce an algorithm for learning hybrid Bayesian networks from data. The result ...
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal contro...
Recently an interesting new architecture of neural networks called “mixture of experts” has b...
A self-organizing mixture network (SOMN) is derived for learning arbitrary density functions. The ne...
Mixture Density Networks are a principled method to model conditional probability density functions ...
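The training objective behind a Mixture Density Network can be sketched in a few lines: the network head is assumed to emit, per example, mixing coefficients, means, and standard deviations of a Gaussian mixture, and the loss is the negative log-likelihood of the targets under that mixture. The function name and parameter layout here are illustrative assumptions, not from any of the papers above.

```python
import numpy as np

def mdn_nll(pi, mu, sigma, t):
    """Negative log-likelihood of targets under per-example 1-D
    Gaussian mixtures (as output by a hypothetical MDN head).

    pi:    (N, K) mixing coefficients, each row summing to 1
    mu:    (N, K) component means
    sigma: (N, K) component standard deviations, positive
    t:     (N,)   observed target values
    """
    # Density of each Gaussian component evaluated at each target.
    z = (t[:, None] - mu) / sigma
    comp = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma)
    # Mixture density per example, then the summed negative log.
    p = np.sum(pi * comp, axis=1)
    return -np.sum(np.log(p))
```

For a single standard-normal component evaluated at its mean, this reduces to 0.5·log(2π) per example, which is a convenient sanity check; in practice the mixture parameters would come from a network layer with a softmax over `pi` and a positivity constraint on `sigma`.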
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique for hybrid Bayesian...
This paper describes a method by which a neural network learns to fit a distribution to sample data....
Predicting conditional probability densities with neural networks requires complex (at least two-hid...
We consider the problem of learning density mixture models for classification. Traditional learning ...
Neural networks (NNs) with random weights are an interesting alternative to conventional NNs that ar...