Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
We present a new approach to bounding the true error rate of a continuous valued classifier based up...
Deep Learning-based models are becoming more and more relevant for an increasing number of applicati...
© 2017 IEEE. Deep structured Convolutional Neural Networks (CNNs) have recently gained intense atte...
We make three related contributions motivated by the challenge of training stochastic neural network...
This paper presents an empirical study regarding training probabilistic neural networks using traini...
The limit of infinite width allows for substantial simplifications in the analytical study of overpa...
This paper proposes an improved stochastic second order learning algorithm for supervised neural net...
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability o...
Approximate Bayesian Gaussian process (GP) classification techniques are powerful nonparametric lear...
A learning method is self-certified if it uses all available data to simultane...
Most Bayesian network-based classifiers are only able to handle discrete varia...
We establish a disintegrated PAC-Bayesian bound, for classifiers that are trained via continuous-tim...