We show that convex KL-regularized objective functions are obtained from a PAC-Bayes risk bound when using convex loss functions for the stochastic Gibbs classifier that upper-bound the standard zero-one loss used for the weighted majority vote. By restricting ourselves to a class of posteriors that we call quasi-uniform, we propose a simple coordinate descent learning algorithm to minimize the proposed KL-regularized cost function. We show that standard ℓp-regularized objective functions currently used, such as ridge regression and ℓp-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi-uniform posterior and the uniform prior. We present numerical experiments where the proposed learning algorithm ge...
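The abstract above describes minimizing a convex surrogate of the majority-vote risk plus a KL(posterior ‖ uniform prior) regularizer by coordinate descent over the posterior weights. A minimal sketch of that idea follows; the squared-hinge surrogate, the grid-based 1-D coordinate search, and all variable names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): coordinate descent on
# a KL-regularized convex objective over posterior weights q on the simplex.
rng = np.random.default_rng(0)
n, m = 200, 10                                  # examples, base classifiers
H = rng.choice([-1.0, 1.0], size=(n, m))        # base-classifier votes (assumed data)
y = rng.choice([-1.0, 1.0], size=n)             # labels (assumed data)
lam = 0.1                                       # regularization strength

def objective(q):
    margin = y * (H @ q)                        # weighted-vote margin
    loss = np.mean(np.maximum(0.0, 1.0 - margin) ** 2)  # convex surrogate of 0-1 loss
    kl = np.sum(q * np.log(q * m + 1e-12))      # KL(q || uniform prior)
    return loss + lam * kl

q = np.full(m, 1.0 / m)                         # start at the uniform prior
grid = np.linspace(1e-3, 1.0, 50)
for _ in range(20):                             # coordinate-descent sweeps
    for i in range(m):
        best_v, best_obj = q[i], objective(q)
        for v in grid:                          # 1-D search over coordinate i
            trial = q.copy()
            trial[i] = v
            trial /= trial.sum()                # renormalize onto the simplex
            o = objective(trial)
            if o < best_obj:
                best_v, best_obj = v, o
        q[i] = best_v
        q /= q.sum()
```

Each coordinate update only accepts a move if it lowers the objective, so the KL-regularized cost is non-increasing across sweeps and `q` stays a valid posterior on the simplex.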
In several supervised learning applications, it happens that reconstruction methods have to be appli...
PAC-Bayes bounds have been proposed to get risk estimates based on a training sample. In this paper...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a ...
In the Bayesian reinforcement learning (RL) setting, a prior distribution over the unknown problem p...
Laplace random variables are commonly used to model extreme noise in many fields, while systems trai...
We consider the problem of supervised learning with convex loss functions and propose a new form of ...
We propose new PAC-Bayes bounds for the risk of the weighted majority vote that depend on the mean a...
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss function...
We establish risk bounds for Regularized Empirical Risk Minimizers (RERM) when the loss is Lipschitz...
In this work we develop efficient methods for learning random MAP predictors for structured label pr...
We give sharper bounds for uniformly stable randomized algorithms in a PAC-Bayesian framework, which...
In this paper, we improve the PAC-Bayesian error bound for linear regression derived in Germain et a...
The probability of error of classification methods based on convex combinations of simple base class...