We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly projected data. We make no restrictive assumptions (such as sparsity or separability) on the data. Instead, we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ We therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
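To illustrate the ‘label flipping’ quantity informally, the sketch below (not taken from the paper; names such as flip_probability_mc are hypothetical) uses Monte Carlo simulation to estimate how often the sign of a linear classifier's decision value w·x changes when both the classifier normal w and a point x are passed through the same k × d Gaussian random projection. By the rotational invariance of the Gaussian distribution, this estimate depends only on the angle between w and x and on the projection dimension k.

```python
import numpy as np

def flip_probability_mc(w, x, k, n_trials=20000, seed=None):
    """Monte Carlo estimate of the probability that sign(w.x) differs
    from sign((Rw).(Rx)) when R is a k x d Gaussian random projection
    with i.i.d. N(0, 1/k) entries."""
    rng = np.random.default_rng(seed)
    d = w.shape[0]
    original_sign = np.sign(w @ x)
    flips = 0
    for _ in range(n_trials):
        R = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
        # A 'label flip': the projected decision value changes sign.
        if np.sign((R @ w) @ (R @ x)) != original_sign:
            flips += 1
    return flips / n_trials

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 100
    w = rng.normal(size=d)   # hypothetical classifier normal
    x = rng.normal(size=d)   # hypothetical query point
    for k in (1, 5, 25):
        p = flip_probability_mc(w, x, k, seed=k)
        print(f"k={k:3d}  estimated flip probability: {p:.3f}")
```

The 1/√k scaling is the usual Johnson–Lindenstrauss convention; it does not affect the sign of the projected inner product, so it plays no role in the flip probability itself.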