In this paper we propose a Bayesian model for multi-task feature selection. This model is based on a generalized spike and slab sparse prior distribution that enforces the selection of a common subset of features across several tasks. Since exact Bayesian inference in this model is intractable, approximate inference is performed through expectation propagation (EP). EP approximates the posterior distribution of the model using a parametric probability distribution. This posterior approximation is particularly useful to identify relevant features for prediction. We focus on problems for which the number of features d is significantly larger than the number of instances for each task. We propose an efficient parametrization of the EP algorith...
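To make the construction concrete, a standard form of such a shared ("generalized") spike-and-slab prior, written here in illustrative notation that may differ from the paper's, places a single binary indicator on each feature that is common to all tasks:

\[
z_j \sim \mathrm{Bernoulli}(p_0), \qquad j = 1, \dots, d,
\]
\[
w_{kj} \mid z_j \;\sim\; z_j \,\mathcal{N}(w_{kj} \mid 0, v) \;+\; (1 - z_j)\,\delta_0(w_{kj}), \qquad k = 1, \dots, K,
\]

where \(w_{kj}\) is the coefficient of feature \(j\) in task \(k\), \(\delta_0\) is a point mass at zero, and \(p_0\) controls the expected fraction of relevant features. Because the exact posterior over \(\{w_{kj}, z_j\}\) couples all tasks through the shared indicators and is intractable, EP approximates it with a tractable parametric family (typically Gaussian over the coefficients and Bernoulli over the indicators), whose marginals can then be used to rank features by their posterior probability of being relevant.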
This paper introduces a novel sparse Bayesian machine-learning algorithm for embedded feature select...
We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a fac...
The problem of multi-task learning (MTL) is considered for sequential data, such as that typically m...
A probabilistic model based on the horseshoe prior is proposed for learning dependencies in the pro...
Multi-task feature selection methods often make the hypothesis that learning tasks share relevant an...
We describe a Bayesian method for group feature selection in linear regression problems. The method ...
This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear cla...
We address the problem of joint feature selection across a group of related classification or regres...
In many real-world classification problems the input contains a large number of potentially irrelev...
For many real-world machine learning applications, labeled data is costly because the data labeling ...
In the context of statistical machine learning, sparse learning is a procedure that seeks a reconcil...
In this work, we address the problem of solving a series of underdetermined linear inverse problems...