Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms...
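To make choices (a)–(c) concrete, here is a minimal PyTorch sketch of the hard-parameter-sharing baseline that such latent-architecture methods generalize: which layers are shared, where the tasks split, and the task-loss weights are all fixed by hand rather than learned. This is not the paper's implementation; all module names, sizes, and weight values are illustrative assumptions.

```python
# Hard parameter sharing with hand-set task-loss weights (a minimal sketch).
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes=(5, 3)):
        super().__init__()
        # (a) which layers are shared: here, the entire trunk.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # (b) how much is shared: everything except the per-task heads.
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in n_classes)

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

model = HardSharingMTL()
loss_fn = nn.CrossEntropyLoss()
loss_weights = [1.0, 0.5]  # (c) relative task weights, fixed a priori

x = torch.randn(32, 64)
targets = [torch.randint(0, 5, (32,)), torch.randint(0, 3, (32,))]
logits = model(x)
loss = sum(w * loss_fn(l, t) for w, l, t in zip(loss_weights, logits, targets))
loss.backward()
```

A latent-architecture approach replaces each of these three hand-set choices with learnable parameters; the fixed trunk/head split and the `loss_weights` list above mark exactly where that learning would happen.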
Multi-task learning (MTL) is a learning paradigm involving the joint optimization of parameters with...
We propose a novel multi-task learning architecture, which allows learning of task-specific feature-...
This work aims to contribute to our understanding of when multi-task learning ...
Multi-Task Learning is today an interesting and promising field that many regard as a must for ach...
In the context of multi-task learning, neural networks with branched architectures have often been e...
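As a hedged illustration of such branched architectures (not any one paper's model): tasks share a trunk up to a hand-chosen depth and then split into per-task branches, and branch-point search methods automate exactly that choice of depth. The layer widths and branch depth below are assumptions for illustration.

```python
# A branched MTL network: shared layers up to `branch_at`, then per-task branches.
import torch
import torch.nn as nn

def make_branched(in_dim, widths, branch_at, head_dims):
    """Shared layers up to index `branch_at`, per-task branches afterwards."""
    dims = [in_dim] + widths
    shared = nn.Sequential(*[
        nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
        for i in range(branch_at)
    ])
    branches = nn.ModuleList()
    for out_dim in head_dims:
        layers = [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
                  for i in range(branch_at, len(widths))]
        layers.append(nn.Linear(dims[-1], out_dim))  # task-specific output layer
        branches.append(nn.Sequential(*layers))
    return shared, branches

shared, branches = make_branched(64, [128, 128, 64], branch_at=2, head_dims=[5, 3])
h = shared(torch.randn(8, 64))
outputs = [branch(h) for branch in branches]  # one output per task
```

Moving `branch_at` earlier gives the tasks more private capacity; moving it later shares more computation, which is the trade-off branched-architecture search methods optimize.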
Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, ...
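For contrast with hard sharing, the following sketch shows soft parameter sharing under common assumptions: each task keeps its own tower, and an L2 penalty pulls corresponding weights together instead of tying them. The coupling strength `lam` and the tower architecture are illustrative, not taken from any of the cited models.

```python
# Soft parameter sharing: separate towers coupled by an L2 distance penalty.
import torch
import torch.nn as nn

def tower(in_dim=64, hidden=128, out_dim=5):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

tower_a, tower_b = tower(out_dim=5), tower(out_dim=3)

def sharing_penalty(net_a, net_b):
    # Sum of squared distances between corresponding same-shape parameters;
    # the task-specific output layers differ in shape and are skipped.
    return sum((pa - pb).pow(2).sum()
               for pa, pb in zip(net_a.parameters(), net_b.parameters())
               if pa.shape == pb.shape)

x = torch.randn(16, 64)
ya = torch.randint(0, 5, (16,))
yb = torch.randint(0, 3, (16,))
loss_fn = nn.CrossEntropyLoss()
lam = 1e-2  # how strongly the two towers are coupled
loss = (loss_fn(tower_a(x), ya) + loss_fn(tower_b(x), yb)
        + lam * sharing_penalty(tower_a, tower_b))
loss.backward()
```

Setting `lam` to zero recovers fully independent single-task models, while a very large `lam` approaches hard sharing; the methods above differ mainly in how this degree of coupling is chosen or learned.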
Multimedia applications often require concurrent solutions to multiple tasks. These tasks hold clues...
Given several tasks, multi-task learning (MTL) learns multiple tasks jointly by exploring the interd...
Machine learning applications, such as object detection and content recommendation, often require tr...
In this figure, two related tasks are trained simultaneously using the network architecture f...
One of the most salient and well-recognized features of human goal-directed behavior is our limited ...