In this paper, we study the performance of extremum estimators from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. By adapting classical concentration inequalities, we derive upper bounds on the empirical out-of-sample prediction error as a function of the in-sample error, the in-sample data size, the heaviness of the tails of the error distribution, and model complexity. We show that the error bounds may be used for tuning key estimation hyper-parameters, such as the number of folds K in cross-validation. We also show how K affects the bias-variance trade-off of cross-validation. We demonstrate that the L2-norm difference between penalized and the correspon...
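A minimal sketch of the K-tuning question this abstract raises, assuming a synthetic linear model with heavy-tailed errors and scikit-learn's cross-validation utilities; the data, the Ridge learner, and the grid of K values are illustrative assumptions, not the paper's construction:

```python
# Sketch (not the paper's method): estimate out-of-sample error with
# K-fold cross-validation and observe how the choice of K shifts the
# bias-variance trade-off of the estimate.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.standard_t(df=3, size=n)  # heavy-tailed errors, echoing the paper's setting

for K in (2, 5, 10, 20):
    folds = KFold(n_splits=K, shuffle=True, random_state=0)
    scores = -cross_val_score(Ridge(alpha=1.0), X, y,
                              cv=folds, scoring="neg_mean_squared_error")
    # Larger K: each training split is closer to the full sample (less bias),
    # but the K fold estimates overlap more (higher variance of their average).
    print(f"K={K:2d}  mean CV error={scores.mean():.3f}  spread across folds={scores.std():.3f}")
```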
We derive new margin-based inequalities for the probability of error of classifiers. The main featur...
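For reference, margin-based error inequalities are typically stated in terms of the empirical margin error: the fraction of training points classified with margin at most gamma. The sketch below computes that quantity for an assumed linear classifier on synthetic data; the classifier, data, and gamma grid are illustrative, not the abstract's construction:

```python
# Sketch: empirical gamma-margin error of a linear classifier, the
# quantity margin-based generalization bounds are usually written in.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=100))
w = np.array([1.0, 0.0])  # a candidate linear classifier (assumed)

# Signed distance of each point to the decision boundary, with the sign
# of the true label: positive means correctly classified with that margin.
margins = y * (X @ w) / np.linalg.norm(w)
for gamma in (0.0, 0.1, 0.5):
    # Points misclassified or classified with margin at most gamma.
    print(f"gamma={gamma:.1f}  empirical margin error={np.mean(margins <= gamma):.2f}")
```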
In order to compare learning algorithms, experimental results reported in the machine learning liter...
In this paper, we study the generalization ability (GA)---the ability of a model to predict outcomes...
Model selection is difficult to analyse yet theoretically and empirically important, especially for ...
How can we select the best performing data-driven model and quantify its generalization error? This ...
How can we select the best performing data-driven model? How can we rigorously estimate its generali...
We present a general approach to deriving bounds on the generalization error of randomized learning ...
This paper brings together methods from two different disciplines: statistics and machine learning. ...
We study model selection strategies based on penalized empirical loss minimization. We point out a...
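A minimal sketch of model selection by penalized empirical loss minimization, assuming nested polynomial classes and a simple linear-in-dimension penalty pen(d) = c(d+1)/n; the penalty shape, constant, and data are illustrative assumptions, not the strategies analysed in the paper:

```python
# Sketch: over nested polynomial classes, pick the degree minimizing
# empirical squared loss plus a complexity penalty.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-1, 1, size=n)
y = np.sin(3 * x) + 0.3 * rng.normal(size=n)

def empirical_loss(degree):
    """Least-squares polynomial fit of the given degree; mean squared residual."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((y - np.polyval(coeffs, x)) ** 2)

c = 1.0  # penalty constant (assumed)
penalized = {d: empirical_loss(d) + c * (d + 1) / n for d in range(1, 11)}
best = min(penalized, key=penalized.get)
print("selected degree:", best)
```

The penalty grows with the number of fitted parameters, so richer classes must buy their extra flexibility with a genuinely lower empirical loss.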