Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two separate recent papers (Gao et al., 2008) and (Wang et al., 2007). This paper is concerned with learning kernels under the LASSO formulation by adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model capable of learning both the regularization parameters and the kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models, such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS), is given. The new algorithm is also demonstrated to possess considera...
© Springer International Publishing AG 2017. Performing predictions using a non-linear support vecto...
Abstract: A learning algorithm for regression is studied. It is a modified kernel projection machine (...
Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is...
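The sparsity behaviour described above can be illustrated with a minimal coordinate-descent Lasso sketch (the data, penalty value, and function names here are illustrative, not taken from any of the cited papers): irrelevant features receive weights of exactly zero, while the informative feature survives shrinkage.

```python
import numpy as np

def soft_threshold(rho, lam):
    # Soft-thresholding operator: the closed-form solution of the
    # one-dimensional Lasso subproblem.
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for (1/2n)||y - Xw||^2 + lam * ||w||_1.
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=100)  # only feature 0 matters
w = lasso_cd(X, y, lam=0.5)
# w[0] stays large (shrunk toward 3), w[1:] are driven to exactly zero
```

The L1 penalty acts through the soft-thresholding step: any coordinate whose correlation with the residual falls below the penalty is set to exactly zero rather than merely made small, which is what distinguishes Lasso from ridge regression.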
Abstract—Sparse kernel methods are very efficient in solving regression and classification problems....
We present here a simple technique that simplifies the construction of Bayesian treatments of a vari...
In this paper, we present a simple mathematical trick that simplifies the derivation of Bayesian tre...
© Springer-Verlag Berlin Heidelberg 2015. This chapter addresses the study of kernel methods, a clas...
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and...
Kernel selection is a central issue in kernel methods of machine learning. In this paper, we investi...
Regression with L1-regularization, Lasso, is a popular algorithm for recovering the sparsity pattern...
Low-rank approximation. Abstract: Advances in modern science and engineering lead to unpreceden...
In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to unde...
A novel L1 significant vector (SV) regression algorithm is proposed in this paper. The proposed regul...
The popular Lasso approach for sparse estimation can be derived via marginalization of a joi...
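The Bayesian reading of the Lasso mentioned above can be checked numerically: under a Gaussian likelihood and an independent Laplace prior on the weights, the negative log posterior coincides (up to a constant and a scale factor) with the Lasso objective with penalty λ = σ²/b. The sketch below uses made-up data and assumed values for the noise variance σ² and the Laplace scale b purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
sigma2, b = 0.5, 2.0   # noise variance and Laplace prior scale (assumed)
lam = sigma2 / b       # the equivalent Lasso penalty

def neg_log_posterior(w):
    # Gaussian likelihood term plus Laplace prior term, dropping constants.
    return (0.5 / sigma2) * np.sum((y - X @ w) ** 2) + np.sum(np.abs(w)) / b

def lasso_objective(w):
    # Standard Lasso objective with penalty lam = sigma2 / b.
    return 0.5 * np.sum((y - X @ w) ** 2) + lam * np.sum(np.abs(w))

# The two objectives differ only by the overall factor 1/sigma2, so they
# rank any pair of candidate weight vectors identically; in particular
# their MAP / minimizing solutions coincide.
w1, w2 = rng.normal(size=3), rng.normal(size=3)
```

Because the two criteria agree up to a positive scale factor, maximizing the posterior and minimizing the Lasso objective select the same weights, which is the sense in which the Lasso is a MAP estimate under a Laplace prior.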