We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that...
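The alternating supervised/unsupervised scheme described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: it assumes linear per-task models with a squared loss, a shared positive semidefinite metric `D` (trace-normalized), and hypothetical hyperparameters `gamma`, `eps`, and `iters`.

```python
import numpy as np

def multitask_feature_learning(Xs, ys, gamma=1.0, eps=1e-6, iters=50):
    """Alternating-minimization sketch of shared feature learning.

    Supervised step: fit each task's weight vector under the shared
    metric D (ridge-like solve with penalty w^T D^{-1} w).
    Unsupervised step: update D from the stacked weights W via the
    matrix square root of W W^T, normalized to unit trace.
    """
    d = Xs[0].shape[1]          # number of input features
    T = len(Xs)                 # number of tasks
    D = np.eye(d) / d           # shared feature metric, trace 1
    W = np.zeros((d, T))        # column t = weights of task t
    for _ in range(iters):
        Dinv = np.linalg.pinv(D)
        # supervised step: per-task regularized least squares
        for t, (X, y) in enumerate(zip(Xs, ys)):
            W[:, t] = np.linalg.solve(X.T @ X + gamma * Dinv, X.T @ y)
        # unsupervised step: D <- (W W^T + eps I)^{1/2} / trace(...)
        U, s, _ = np.linalg.svd(W @ W.T + eps * np.eye(d))
        S = U @ np.diag(np.sqrt(s)) @ U.T
        D = S / np.trace(S)
    return W, D
```

The trace normalization of `D` plays the role of the regularizer that concentrates the learned representation on a small number of features shared across tasks.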
This paper proposes a novel algorithm, named Non-Convex Calibrated Multi-Task Learning (NC-CMTL), fo...
In multi-task learning, when the number of tasks is large, pairwise task relations exhibit sparse pa...
Multi-task learning solves multiple related learning problems simultaneously by sharing some common ...
We present a method for jointly learning r > 1 similar classification tasks. Our method builds upo...
Given several tasks, multi-task learning (MTL) learns multiple tasks jointly by exploring the interd...
Multi-task learning can extract the correlation of multiple related machine learning problems to imp...
This paper considers the multi-task learning problem in the setting where some relevant feature...
We look at solving the task of Multitask Feature Learning by way of feature selection. We find that...
Multi-task sparse feature learning aims to improve the generalization performance by exploiting the ...
Regularization with matrix variables for multi-task learning: Learning multiple tasks on a subspace ...
In multi-task learning, multiple related tasks are considered simultaneously, with the goal to impro...
A common approach in multi-task learning is to encourage the tasks to share a ...
We consider multi-task learning in the setting of multiple linear regression, and where some relevan...
We address the problem of joint feature selection across a group of related classification or regres...