We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression, and variants of this problem including constraints on the parameters of the linear combination. When the input distribution is known, there already exists an algorithm having an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log n factor, and require some additional assumptions like uniform boundedness of the d-dimensional input representation and exponential moments of the output. This work provides new risk bounds for the ridge estimator and the ordinary least squares estimator, and their variants. It al...
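As a quick illustration of the setting in this abstract (and not of the paper's own estimators or proofs), the sketch below fits the ordinary least squares and ridge estimators on synthetic Gaussian data and estimates their excess risk over the best linear combination by Monte Carlo; the dimension d, sample sizes, noise level, and ridge parameter lam are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test, lam = 10, 200, 100_000, 1.0  # illustrative dimension, sample sizes, ridge parameter

# Synthetic model: Y = <theta_star, X> + noise, so theta_star is the best linear combination.
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
Y = X @ theta_star + rng.normal(scale=0.5, size=n)

# Ordinary least squares and ridge estimators.
theta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Monte Carlo estimate of the excess risk R(theta) - R(theta_star),
# the quantity that bounds of order d/n control.
X_test = rng.normal(size=(n_test, d))
Y_test = X_test @ theta_star + rng.normal(scale=0.5, size=n_test)

def risk(theta):
    return np.mean((Y_test - X_test @ theta) ** 2)

print("excess risk, OLS  :", risk(theta_ols) - risk(theta_star))
print("excess risk, ridge:", risk(theta_ridge) - risk(theta_star))
print("d/n               :", d / n)
```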
We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a ...
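For orientation only, the snippet below evaluates a generic McAllester-style PAC-Bayesian bound for a convex combination rho of finitely many classifiers with prior pi, namely R(rho) <= r_hat(rho) + sqrt((KL(rho||pi) + ln(2*sqrt(n)/delta)) / (2n)); this is a standard bound used as an illustration, not necessarily the bound established in that work, and the empirical risks, weights, and confidence level are made-up placeholders.

```python
import numpy as np

def pac_bayes_bound(emp_risks, rho, pi, n, delta=0.05):
    """McAllester-style bound on the Gibbs risk of the posterior rho:
    R(rho) <= r_hat(rho) + sqrt((KL(rho || pi) + ln(2*sqrt(n)/delta)) / (2n))."""
    emp_gibbs = float(np.dot(rho, emp_risks))    # empirical Gibbs risk r_hat(rho)
    kl = float(np.sum(rho * np.log(rho / pi)))   # KL(rho || pi) over a finite set of classifiers
    slack = np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))
    return emp_gibbs + slack

# Illustrative numbers: 3 base classifiers, uniform prior, n = 1000 training points.
emp_risks = np.array([0.10, 0.15, 0.30])  # empirical errors of the base classifiers
pi = np.full(3, 1 / 3)                    # uniform prior over the classifiers
rho = np.array([0.7, 0.2, 0.1])           # posterior / convex-combination weights
print(pac_bayes_bound(emp_risks, rho, pi, n=1000))
```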
Risk bounds, which are also called generalisation bounds in the statistical learning literature, are...
In this paper, we improve the PAC-Bayesian error bound for linear regression derived in Germain et a...
We consider the problem of robustly predicting as well as the best lin...
The aim of this paper is to generalize the PAC-Bayesian theorems proved by Cat...
We analyze the expected risk of linear classifiers for a fixed weight vector in the “minimax” settin...