Supervised learning in general, and regularized risk minimization in particular, is about solving an optimization problem jointly defined by a performance measure and a set of labeled training examples. The outcome of learning, a model, is then used mainly to predict labels for unlabeled examples in the testing environment. In real-world scenarios, a typical learning process involves solving a sequence of similar problems with different parameters before a final model is identified. For learning to be successful, the final model must be produced in a timely manner, and it should be robust to (mild) irregularities in the testing environment. The purpose of this thesis is to investigate ways to speed up the learning process and i...
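To make the setup concrete, here is a minimal sketch of regularized risk minimization: squared loss plus an L2 penalty (ridge regression) on a scalar weight, minimized by gradient descent. The toy data, learning rate, and penalty strength are illustrative assumptions, not taken from any of the works listed below.

```python
# Minimal sketch of regularized risk minimization:
# minimize (1/n) * sum_i (w*x_i - y_i)^2 + lam * w^2 over a scalar w.
# Data and hyperparameters below are illustrative assumptions.

def fit_ridge(xs, ys, lam=0.1, lr=0.01, steps=2000):
    """Gradient descent on the L2-regularized empirical squared risk."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of the empirical risk term plus the penalty term
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # generated by y = 2x, no noise
w = fit_ridge(xs, ys, lam=0.0)    # lam = 0 recovers plain least squares
print(w)                          # close to 2.0
```

With `lam > 0` the penalty shrinks the learned weight toward zero, trading a little training-set fit for stability; this is the basic mechanism that the regularization and robustness results surveyed below build on.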
The paper brings together methods from two disciplines: machine learning theory and robust statistic...
Various regularization techniques are investigated in supervised learning from data. Theoretical fea...
Supervised machine learning techniques developed in the Probably Approximately Correct, Maximum A Po...
We present a globally convergent method for regularized risk minimization problems. Our method appl...
Many machine learning algorithms lead to solving a convex regularized risk minimization pr...
Discriminative methods for learning structured output classifiers have been gaining popula...
In this paper, we propose a computationally tractable and provably convergent algorithm for robust o...
We consider the problem of supervised learning with convex loss functions and propose a new form of ...
In learning problems, avoiding to overfit the training data is of fundamental importance in order to...
A wide variety of machine learning problems can be described as minimizing a regularized risk functi...
One of the main goals of Artificial Intelligence is to develop models capable of providing valuable p...
We establish risk bounds for Regularized Empirical Risk Minimizers (RERM) when the loss is Lipschitz...
We consider a general statistical learning problem where an unknown fraction of the training data is...
The rising amount of data has changed the classical approaches in statistical modeling significantly...
We develop two new approaches to robustness and learning in data-driven portfolio optimization, a pr...