In this paper, we provide a straightforward proof of an important, but nevertheless little-known, result obtained by Lindley in the framework of subjective probability theory. This result, once interpreted in the machine learning/pattern recognition context, sheds new light on the probabilistic interpretation of the output of a trained classifier. A learning machine, or more generally a model, is usually trained by minimizing a criterion: the expectation of a cost function measuring the discrepancy between the model output and the desired output. In this letter, we first show that, in the binary classification case, training the model with any 'reasonable cost function' can lead to Bayesian a posteriori probability estimation. Indeed,...
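A minimal sketch of the claim above, under simplifying assumptions of my own (discrete inputs, squared-error cost, an unconstrained per-input output): for 0/1 targets, the output that minimizes the expected squared error at each input x is E[y | x], which is exactly the a posteriori probability P(y=1 | x). The values chosen here (three input values, posteriors 0.2/0.5/0.9) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: x takes 3 discrete values, with assumed true
# class-1 posteriors P(y=1 | x) given below.
true_post = np.array([0.2, 0.5, 0.9])
n = 200_000
x = rng.integers(0, 3, size=n)
y = (rng.random(n) < true_post[x]).astype(float)

# "Training": one free output per input value, fit by minimizing the
# mean squared error. The MSE minimizer at each x is the sample mean
# of y there, i.e. an estimate of E[y | x] = P(y=1 | x).
outputs = np.array([y[x == k].mean() for k in range(3)])

print(outputs)  # approaches [0.2, 0.5, 0.9] as n grows
```

The same conclusion holds for other 'reasonable' cost functions (e.g. cross-entropy), whose population minimizer is likewise the conditional probability; squared error is used here only because its minimizer is the plain sample mean.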
In this paper we present an average-case analysis of the Bayesian classifier, a simple induction alg...
This work addresses the problem of estimating the optimal value function in a Markov Decision Proces...
Bayesian analysts use a formal model, Bayes' theorem, to learn from their data, in contrast to non-Bay...
Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in ...
It sometimes happens (for instance in case control studies) that a classifier is trained on a data s...
We provide a decision theoretic approach to the construction of a learning process in the presence o...
Probabilistic modeling lets us infer, predict and make decisions based on incomplete or noisy data. ...
A general approach to Bayesian learning revisits some classical results, which study which functiona...
This tutorial text gives a unifying perspective on machine learning by covering both probabilistic a...
How can a machine learn from experience? Probabilistic modelling provides a framework for understand...
This paper argues that Bayesian probability theory is a general method for machine learning. From tw...
We compare two strategies for training connectionist (as well as non-connectionist) models for stati...
In this work we consider probabilistic approaches to sequential decision making. The ultimate goal i...