Vector-valued learning, in which the output space admits a vector-valued structure, is an important problem that covers a broad family of domains, e.g., multi-task learning and transfer learning. Using local Rademacher complexity and unlabeled data, we derive novel semi-supervised excess risk bounds for general vector-valued learning from both the kernel and linear perspectives. The derived bounds are much sharper than existing ones, and the convergence rates improve from the square root of the labeled sample size to the square root of the total sample size, or depend directly on the labeled sample size. Motivated by our theoretical analysis, we propose a general semi-supervised algorithm for efficiently learning vector-valued functi...
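The abstract above rests on (local) Rademacher complexity as its data-dependent capacity measure. As a minimal, hypothetical illustration of that quantity (not code from the paper), the sketch below estimates the empirical Rademacher complexity of a small finite class of vector-valued predictors by Monte Carlo over random sign vectors; the function names and setup are assumptions for illustration only.

```python
import numpy as np

def empirical_rademacher(X, predictors, n_draws=200, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite class of vector-valued predictors on the sample X.

    Each predictor maps an (n, p) input array to an (n, d) output array.
    For each draw of i.i.d. Rademacher signs sigma in {-1, +1}^(n x d),
    we take the supremum over the class of (1/n) * sum_i <f(x_i), sigma_i>,
    then average over draws.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Precompute all predictor outputs: shape (F, n, d).
    outputs = np.stack([f(X) for f in predictors])
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=outputs.shape[1:])
        # Inner product of each predictor's outputs with the random signs,
        # then the supremum over the (finite) class.
        correlations = np.einsum('fnd,nd->f', outputs, sigma)
        total += np.max(correlations) / n
    return total / n_draws
```

For a class closed under negation (containing both f and -f), each draw's supremum is nonnegative, so the estimate is as well; the "local" variants discussed in the abstracts restrict this supremum to a small-variance subset of the class.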
We derive an upper bound on the local Rademacher complexity of ℓp-norm multipl...
This paper provides a comprehensive error analysis of learning with vector-valued random features (R...
Previous works in the literature showed that performance estimates of learning procedures can be chara...
Many fundamental machine learning tasks can be formulated as a problem of learning with vector-value...
Considering a single prediction task at a time is the most common paradigm in machine learning pra...
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of com...
We analyze the local Rademacher complexity of empirical risk minimization-based mu...
We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we estab...
For certain families of multivariable vector-valued functions to be approximated, the accuracy of ap...
This paper presents a general vector-valued reproducing kernel Hilbert spaces (RKHS) framework for t...