In multi-task learning, multiple related tasks are considered simultaneously, with the goal of improving generalization performance by exploiting the information shared across tasks. This paper presents a multi-task learning approach that models task-feature relationships. Specifically, instead of assuming that similar tasks have similar weights on all the features, we start from the motivation that tasks should be related through subsets of features, which implies a co-cluster structure. We design a novel regularization term to capture this task-feature co-cluster structure. A proximal algorithm is adopted to solve the optimization problem. Convincing experimental results demonstrate the effectiveness of the proposed approach.
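
The abstract names the ingredients of the method (a per-task loss, a regularization term encoding the task-feature co-cluster structure, and a proximal algorithm) without giving their exact forms. The following minimal Python sketch only illustrates the general proximal-gradient pattern: the squared loss and the row-wise l2,1 penalty (with its closed-form proximal operator) are assumed stand-ins, not the paper's co-cluster regularizer, and the function names are hypothetical.

# Minimal proximal-gradient sketch for a regularized multi-task objective.
# Assumptions (not from the abstract): squared loss per task, and an l2,1
# group-sparse penalty standing in for the co-cluster regularization term.
import numpy as np

def prox_l21(W, tau):
    """Row-wise soft-thresholding: prox of tau * sum_j ||W[j, :]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def mtl_proximal_gradient(Xs, ys, lam=0.1, step=None, iters=500):
    """Learn one weight column per task; Xs[t] is (n_t, d), ys[t] is (n_t,)."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # Conservative step size from the largest per-task Lipschitz constant.
        L = max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in Xs)
        step = 1.0 / L
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = X.T @ (X @ W[:, t] - y) / X.shape[0]  # squared-loss gradient
        W = prox_l21(W - step * G, step * lam)  # gradient step, then prox step
    return W

The same loop structure applies if the l2,1 prox is replaced by the proximal operator of the paper's co-cluster regularizer; only prox_l21 would change.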