Approximate Bayesian Gaussian process (GP) classification techniques are powerful nonparametric learning methods, similar in appearance and performance to support vector machines. Based on simple probabilistic models, they render interpretable results and can be embedded in Bayesian frameworks for model selection, feature selection, etc. In this paper, by applying the PAC-Bayesian theorem of McAllester (1999a), we prove distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques. We also provide a new and much simplified proof for this powerful theorem, making use of the concept of convex duality, which is a backbone of many machine learning techniques. We instantiate and test our bounds...
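For context, one standard statement of the PAC-Bayesian theorem referred to above (this is the familiar form of McAllester's bound, not necessarily the exact variant used in the paper): for any prior distribution P over hypotheses fixed before seeing the data, with probability at least 1 - \delta over an i.i.d. sample of size n, every posterior Q satisfies

```latex
\operatorname{err}(Q) \;\le\; \widehat{\operatorname{err}}(Q)
  \;+\; \sqrt{\frac{\operatorname{KL}(Q \,\|\, P) + \ln\frac{1}{\delta} + \ln n + 2}{2n - 1}},
```

where \(\operatorname{err}(Q)\) and \(\widehat{\operatorname{err}}(Q)\) are the true and empirical error of the Gibbs classifier drawn from \(Q\), and \(\operatorname{KL}(Q \,\|\, P)\) is the Kullback-Leibler divergence between posterior and prior. The divergence term is what makes the bound distribution-free yet data-dependent.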
Bayesian nonparametric models are widely and successfully used for statistical prediction. ...
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kern...
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss function...
Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, an...
Gaussian processes are attractive models for probabilistic classification but unfortunately exact in...
Due to their flexibility Gaussian processes are a well-known Bayesian framework for nonparametric fu...
In this work, we construct generalization bounds to understand existing learning algorithms and prop...
We present a competitive analysis of some non-parametric Bayesian algorithms in a worst-case online ...
The common method to understand and improve classification rules is to prove bounds on the generaliz...
I propose two new kernel-based models that enable an exact generative procedure: the Gaussian proces...
The assessment of the reliability of systems which learn from data is a key issue to investigate tho...
This tutorial gives a concise overview of existing PAC-Bayesian theory focusing on three generalizat...