We are interested in a framework of online learning with kernels for low-dimensional but large-scale and potentially adversarial datasets. We study the computational and theoretical performance of online variants of kernel ridge regression. Despite its simplicity, the algorithm we study is the first to achieve optimal regret for a wide range of kernels with a per-round complexity of order $n^\alpha$ with $\alpha < 2$. The algorithm is based on approximating the kernel with the linear span of basis functions. Our contribution is twofold: 1) For the Gaussian kernel, we propose to build the basis beforehand (independently of the data) through a Taylor expansion. For $d$-dimensional inputs, we provide a (close to) optimal reg...
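The idea described above can be illustrated with a minimal sketch: once the kernel is replaced by a fixed $M$-dimensional basis built from a Taylor expansion, online kernel ridge regression reduces to online linear ridge regression over explicit features, with per-round cost depending on $M$ rather than on the number of rounds. The sketch below is illustrative, not the paper's algorithm: the names `gaussian_taylor_features` and `OnlineRidge` are ours, inputs are one-dimensional for brevity, and the rank-one inverse update is the standard Sherman-Morrison formula.

```python
import numpy as np
from math import factorial, exp, sqrt

def gaussian_taylor_features(x, M, sigma=1.0):
    # Truncated Taylor feature map for the 1-D Gaussian kernel
    #   k(x, y) = exp(-(x - y)^2 / (2 sigma^2))
    #           = sum_k phi_k(x) phi_k(y),
    #   phi_k(x) = exp(-x^2 / (2 sigma^2)) (x / sigma)^k / sqrt(k!).
    # The basis is fixed in advance, independently of the data.
    return np.array([exp(-x**2 / (2 * sigma**2)) * (x / sigma)**k / sqrt(factorial(k))
                     for k in range(M)])

class OnlineRidge:
    """Online ridge regression over a fixed M-dimensional basis.

    Per-round cost is O(M^2) via a Sherman-Morrison update of the
    inverse regularized Gram matrix, independent of the round count.
    """
    def __init__(self, M, lam=1.0):
        self.A_inv = np.eye(M) / lam   # inverse of (lam * I)
        self.b = np.zeros(M)           # running sum of y_t * phi_t

    def predict(self, phi):
        # theta_t = A^{-1} b, prediction is theta_t . phi
        return float(phi @ self.A_inv @ self.b)

    def update(self, phi, y):
        # Rank-one update of A^{-1} for A <- A + phi phi^T
        Aphi = self.A_inv @ phi
        self.A_inv -= np.outer(Aphi, Aphi) / (1.0 + phi @ Aphi)
        self.b += y * phi

# Toy online loop: predict before each label is revealed, then update.
rng = np.random.default_rng(0)
model = OnlineRidge(M=8, lam=1.0)
for t in range(500):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(3 * x) + 0.1 * rng.normal()
    phi = gaussian_taylor_features(x, M=8)
    y_hat = model.predict(phi)
    model.update(phi, y)
```

Because the Taylor coefficients of the Gaussian kernel decay factorially, a modest truncation level $M$ already reproduces the kernel to high accuracy on bounded inputs, which is what makes a data-independent basis viable here.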
A wide range of statistical and machine learning problems involve learning one or multiple latent fu...
In this thesis, background theory about the online kernel-based algorithms and their use for online...
We present a generalization of the adversarial linear bandits framework, where the underlying losses...
Kernel methods are popular nonparametric modeling tools in machine learning. The Mercer kernel funct...
In this work, we present a new framework for large scale online kernel classification, making kernel...
One of the most challenging problems in kernel online learning is to bound the model size and to pro...
New optimization models and algorithms for online learning with kernels (OLK) in classification and ...
Kernel methods are popular and effective techniques for learning on structured data, such as trees...
Online kernel learning (OKL) is a flexible framework to approach prediction pr...
New optimization models and algorithms for online learning with kernels (OLK) in regression are prop...
In online learning with kernels, it is vital to control the size (budget) of the support set because...
We consider the problem of online linear regression in the stochastic setting....
Large scale online kernel learning aims to build an efficient and scalable kernel-based predictive m...
In this paper, we improve the kernel alignment regret bound for online kernel learning in the regime...
We consider supervised learning problems within the positive-definite kernel f...