Abstract—Sparse kernel methods are very efficient in solving regression and classification problems. The sparsity and performance of these methods depend on selecting an appropriate kernel function, which is typically achieved using a cross-validation procedure. In this paper, we propose an incremental method for supervised learning, which is similar to the relevance vector machine (RVM) but also learns the parameters of the kernels during model training. Specifically, we learn different parameter values for each kernel, resulting in a very flexible model. In order to avoid overfitting, we use a sparsity enforcing prior that controls the effective number of parameters of the model. We present experimental results on artificial data to ...
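For orientation, the sketch below shows the standard sparse Bayesian learning (RVM) procedure that the proposed method builds on: Gaussian kernel basis functions, an automatic relevance determination (ARD) prior over the weights, and Tipping-style re-estimation of the hyperparameters. It uses a single fixed kernel width, whereas the method summarized in the abstract additionally learns a separate parameter for each kernel during training; that extension is not implemented here. The function names, the pruning threshold, and the toy sinc data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, centers, width):
    """Gaussian (RBF) basis functions centered at the training inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def rvm_regression(x, t, width=0.5, noise_var=0.01, n_iter=200, prune_tol=1e6):
    """Minimal RVM-style sparse Bayesian regression (MacKay/Tipping updates).

    A single fixed kernel width is used; the adaptive-kernel method in the
    abstract would also optimize a separate width per basis function.
    """
    Phi = np.column_stack([np.ones_like(x), gaussian_kernel(x, x, width)])
    N, M = Phi.shape
    alpha = np.ones(M)        # one ARD precision per weight (sparsity prior)
    beta = 1.0 / noise_var    # noise precision

    for _ in range(n_iter):
        # Posterior over the weights given the current hyperparameters
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t

        # Hyperparameter re-estimation
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)

        # Cap diverging precisions; the corresponding weights are shrunk to ~0
        alpha = np.minimum(alpha, prune_tol)

    relevant = alpha < prune_tol  # surviving "relevance vectors"
    return mu, relevant

# Toy usage on the classic noisy sinc benchmark
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 100)
t = np.sinc(x / np.pi) + 0.05 * rng.standard_normal(x.size)
mu, relevant = rvm_regression(x, t)
print("relevant basis functions:", relevant.sum(), "of", relevant.size)
```

Sparsity emerges because basis functions whose ARD precision diverges contribute effectively nothing to the predictor; a full implementation would drop the corresponding columns of Phi rather than merely capping the precision, which this sketch does only for brevity.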
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference...
Regression tasks belong to the set of core problems faced in statistics and machine learning and pro...
Abstract—This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear cla...
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and...
Kernelized LASSO (Least Absolute Selection and Shrinkage Operator) has been investigated in two sepa...
We present here a simple technique that simplifies the construction of Bayesian treatments of a vari...
In this paper, we present a simple mathematical trick that simplifies the derivation of Bayesian tre...
Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) ...
The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full ...
In this paper we present a simple hierarchical Bayesian treatment of the sparse kernel logistic regr...
The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full ...
We focus on the selection of kernel parameters in the framework of the relevance vector machine (RVM) ...
Traditional non-parametric statistical learning techniques are often computationally attractive, but...
This paper presents a variational Bayesian kernel selection (VBKS) algorithm for sparse Gaussian pro...
Abstract—In this work, a new fast variational sparse Bayesian learning (SBL) approach with automatic...