Low-rank approximation. Abstract: Advances in modern science and engineering lead to unprecedented amounts of data for information processing. Of particular interest is semi-supervised learning, where very few training samples are available among large volumes of unlabeled data. Graph-based algorithms using Laplacian regularization have achieved state-of-the-art performance, but can incur huge memory and computational costs. In this paper, we introduce L1-norm penalization on the low-rank factorized kernel for efficient, globally optimal model selection in graph-based semi-supervised learning. An important novelty is that our formulation can be transformed to a standard LASSO regression. On one hand, this makes it possible to emplo...
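As a hedged sketch of the reduction this abstract describes (an L1 penalty on a low-rank kernel factorization collapsing to a standard LASSO), the toy code below factorizes an RBF kernel Nyström-style and solves the resulting LASSO by coordinate descent. The landmark choice, the bandwidth `gamma`, and the solver are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding operator S_t(x)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(Phi, y, lam, n_iter=200):
    """Coordinate descent for min_w 0.5*||y - Phi w||^2 + lam*||w||_1."""
    n, d = Phi.shape
    w = np.zeros(d)
    col_sq = (Phi ** 2).sum(axis=0)
    r = y - Phi @ w
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            r += Phi[:, j] * w[j]                # add coord j back to residual
            rho = Phi[:, j] @ r
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= Phi[:, j] * w[j]
    return w

# Low-rank kernel factorization (Nystrom-style): K ~ Phi Phi^T
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
landmarks = X[:10]                               # hypothetical landmark subset
gamma = 0.5
C = np.exp(-gamma * ((X[:, None] - landmarks[None]) ** 2).sum(-1))          # n x m
W = np.exp(-gamma * ((landmarks[:, None] - landmarks[None]) ** 2).sum(-1))  # m x m
evals, evecs = np.linalg.eigh(W)
Phi = C @ evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12)))          # n x m factor

y = rng.standard_normal(60)
w = lasso_cd(Phi, y, lam=5.0)
print((np.abs(w) > 1e-8).sum(), "nonzero coefficients out of", Phi.shape[1])
```

Because the penalized problem is an ordinary LASSO in the factor features `Phi`, any off-the-shelf LASSO solver and its regularization path could be reused here, which is the efficiency point the abstract makes.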
Sparse representation and low-rank approximation are fundamental tools in fields of signal processin...
Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as trace norm....
This paper proposes a new robust regression interpretation of sparse penalties su...
© 2017, Science Press. All rights reserved. Semi-supervised learning algorithm based on non-negative ...
We consider the problem of learning a sparse graph under Laplacian constrained Gaussian graphical mo...
Supervised learning over graphs is an intrinsically difficult problem: simultaneous learning of rele...
Abstract — When the amount of labeled data is limited, semi-supervised learning can improve the lea...
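To make the graph-based semi-supervised setting recurring in these abstracts concrete, here is a minimal label-propagation sketch. The symmetrically normalized smoothing rule, the toy two-clique graph, and all parameter values are illustrative assumptions, not any surveyed paper's method.

```python
import numpy as np

def label_propagation(W, y, labeled_mask, alpha=0.9, n_iter=100):
    """Propagate seed labels over a weighted graph: F <- alpha*S*F + (1-alpha)*Y,
    with S = D^{-1/2} W D^{-1/2} the symmetrically normalized affinity."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # normalized affinity matrix
    classes = np.unique(y[labeled_mask])
    Y = np.zeros((len(y), len(classes)))     # one-hot seed labels
    for i, c in enumerate(classes):
        Y[(y == c) & labeled_mask, i] = 1.0
    F = Y.copy()
    for _ in range(n_iter):                  # iterate toward the fixed point
        F = alpha * (S @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]

# toy graph: two 4-node cliques, one labeled node per clique (-1 = unlabeled)
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
y = np.array([0, -1, -1, -1, 1, -1, -1, -1])
labeled = np.array([True, False, False, False, True, False, False, False])
pred = label_propagation(W, y, labeled)
print(pred)  # → [0 0 0 0 1 1 1 1]
```

With only one labeled node per cluster, smoothness over the graph is enough to label every node, which is exactly the regime these abstracts target: very few labels amid large volumes of unlabeled data.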
We consider the problem of learning a sparse undirected graph underlying a given set of multivariate...
This paper presents a novel noise-robust graph-based semi-supervised learning algorithm to deal with...
Regression with L1-regularization, Lasso, is a popular algorithm for recovering the sparsity pattern...
Nowadays, the explosive increase in data scale provides an unprecedented opportunity to apply machine l...
Sparse machine learning has recently emerged as a powerful tool to obtain models of high-dimensional d...
Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two sepa...
As an important pre-processing stage in many machine learning and pattern recognition domains, featu...
Regularized methods have been widely applied to system identification problems without known model s...