We present a new approach to bounding the true error rate of a continuous-valued classifier based upon PAC-Bayes bounds. The method first constructs a distribution over classifiers by determining how sensitive each parameter in the model is to noise. The true error rate of the stochastic classifier found with the sensitivity analysis can then be tightly bounded using a PAC-Bayes bound. In this paper we demonstrate the method on artificial neural networks, with results of a 2-3 order of magnitude improvement vs. the best deterministic neural net bounds.
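To make the bounding step concrete, the sketch below shows one common way a PAC-Bayes bound of this kind can be evaluated numerically. It is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the stochastic classifier's empirical error rate, the divergence KL(Q||P) between the posterior Q and prior P over parameters, the sample size m, and the confidence parameter delta are already in hand, and it inverts the PAC-Bayes-kl inequality kl(e_hat || e_Q) <= (KL(Q||P) + ln((m+1)/delta)) / m by bisection to obtain an upper bound on the true error rate e_Q. The function names (binary_kl, pac_bayes_error_bound) are illustrative, not from the paper.

```python
import math

def binary_kl(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p), in nats."""
    eps = 1e-12
    q = min(max(q, eps), 1.0 - eps)
    p = min(max(p, eps), 1.0 - eps)
    return q * math.log(q / p) + (1.0 - q) * math.log((1.0 - q) / (1.0 - p))

def pac_bayes_error_bound(emp_error, kl_qp, m, delta):
    """Upper-bound the true error e_Q of a stochastic classifier by
    inverting the PAC-Bayes-kl inequality
        kl(emp_error || e_Q) <= (KL(Q||P) + ln((m+1)/delta)) / m
    via bisection; binary_kl(emp_error, p) is increasing in p for
    p >= emp_error, so the largest admissible p is well defined."""
    rhs = (kl_qp + math.log((m + 1) / delta)) / m
    lo, hi = emp_error, 1.0
    for _ in range(100):  # 100 halvings is far below float precision
        mid = 0.5 * (lo + hi)
        if binary_kl(emp_error, mid) > rhs:
            hi = mid  # inequality violated: the bound lies below mid
        else:
            lo = mid
    return hi

# Hypothetical inputs: 5% empirical error, KL(Q||P) = 1000 nats,
# m = 10000 samples, 95% confidence.
print(pac_bayes_error_bound(0.05, 1000.0, 10000, 0.05))  # roughly 0.21
```

With these illustrative numbers the bound comes out around 0.21, showing how the KL(Q||P) term (driven in the paper's method by the per-parameter noise sensitivity) controls the gap between empirical and certified error.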