We present a new approach to bounding the true error rate of a continuous-valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks, with results of an order-of-magnitude improvement vs. the best deterministic neural net bounds.
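The bound described above is computed by inverting the binary KL divergence: the PAC-Bayes theorem states that, with probability at least 1 − δ over the training sample, kl(ê ‖ e) ≤ (KL(Q‖P) + ln((m+1)/δ)) / m, where ê is the empirical error of the stochastic classifier, e its true error, Q the posterior found by sensitivity analysis, P the prior, and m the sample size. A minimal sketch of this numerical inversion is below; the function names and the binary-search inversion are illustrative, not the paper's actual implementation.

```python
import math

def kl_binary(q, p):
    """Binary KL divergence kl(q || p) between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12  # clamp away from 0/1 to avoid log(0)
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_bound(emp_err, kl_qp, m, delta):
    """Upper-bound the true error of a stochastic classifier.

    emp_err: empirical error of the stochastic (Gibbs) classifier
    kl_qp:   KL(Q || P), posterior-to-prior divergence (e.g. from
             the noise-sensitivity analysis)
    m:       number of training examples
    delta:   confidence parameter
    """
    # Right-hand side of the PAC-Bayes theorem.
    rhs = (kl_qp + math.log((m + 1) / delta)) / m
    # Invert kl(emp_err || e) <= rhs by binary search for the largest
    # true error e consistent with the inequality.
    lo, hi = emp_err, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if kl_binary(emp_err, mid) > rhs:
            hi = mid
        else:
            lo = mid
    return hi
```

Because the bound tightens as KL(Q‖P) shrinks, broadening the posterior over insensitive parameters (as the sensitivity analysis does) directly reduces the certified error rate.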