Abstract. We consider the problem of determining a model for a given system on the basis of experimental data. The amount of data available is limited and, further, may be corrupted by noise. In this situation, it is important to control the complexity of the class of models from which we are to choose our model. In this paper, we first give a simplified overview of the principal features of learning theory. Then we describe how the method of regularization is used to control complexity in learning. We discuss two examples of regularization, one in which the function space used is finite dimensional, and another in which it is a reproducing kernel Hilbert space. Our exposition follows the formulation of Cucker and Smale. We give a new metho...
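The RKHS case described above is commonly instantiated as kernel ridge regression: minimize the empirical squared error plus a norm penalty in the reproducing kernel Hilbert space, which by the representer theorem reduces to a linear solve in the kernel matrix. The sketch below is illustrative only, not the paper's own construction; the function names `krr_fit` and `krr_predict`, the Gaussian kernel, and the parameter values are assumptions for the example.

```python
import numpy as np

def krr_fit(X, y, lam=0.1, gamma=1.0):
    """Kernel ridge regression fit: solve (K + lam*I) alpha = y.

    lam > 0 is the regularization parameter controlling complexity;
    gamma sets the width of the (assumed) Gaussian kernel.
    """
    # Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq_dists)
    # Regularized linear system; lam * I makes the problem well-posed
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate the learned function f(x) = sum_i alpha_i k(x, x_i)."""
    sq_dists = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists) @ alpha
```

Larger values of `lam` shrink the fitted function toward zero (lower complexity, higher bias), while `lam -> 0` approaches interpolation of the noisy data, illustrating the bias-variance trade-off that regularization is meant to control.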
Abstract. A standard assumption in theoretical study of learning algorithms for regression is uniform ...
Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning...
We present a new approach for learning programs from noisy datasets. Our approach is based on two ne...
This dissertation is about learning representations of functions while restricting complexity. In ma...
Despite the recent widespread success of machine learning, we still do not fully understand its fund...
We define notions of stability for learning algorithms and show how to use these notions to derive g...
Various regularization techniques are investigated in supervised learning from data. Theoretical fea...
Learning from data under constraints on model complexity is studied in terms of rates of approximate...
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of com...
Supervised learning from data is investigated from an optimization viewpoint. Ill-posedness issues o...