Recent work has found that multi-task training with a large number of diverse tasks can uniformly improve downstream performance on unseen target tasks. In contrast, the literature on task transferability has established that the choice of intermediate tasks can heavily affect downstream task performance. In this work, we aim to disentangle the effects of scale and relatedness of tasks in multi-task representation learning. We find that, on average, increasing the scale of multi-task learning, in terms of the number of tasks, indeed results in better learned representations than smaller multi-task setups. However, if the target tasks are known ahead of time, then training on a smaller set of related tasks is competitive with the large-scale multi-task setup.
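To make the setting concrete, below is a minimal sketch of multi-task representation learning with a shared encoder and one lightweight head per task, trained by minimizing the sum of the per-task losses. This is an illustration under the usual hard-parameter-sharing assumptions, not the code of any paper summarized here; the model sizes, task names in `task_specs`, and random data are placeholders. "Scale" corresponds to how many tasks `task_specs` contains, and a target-aware setup would restrict it to tasks judged related to the known target.

```python
# Minimal multi-task representation learning sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, task_specs):
        super().__init__()
        # Shared representation learned across all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One classification head per task (task name -> number of classes).
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, n_classes) for name, n_classes in task_specs.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))

# Hypothetical task collection: a "large-scale" run would register many more tasks,
# while a target-aware run would keep only tasks related to the known target.
task_specs = {"task_a": 3, "task_b": 5, "task_c": 2}
model = MultiTaskModel(input_dim=32, hidden_dim=64, task_specs=task_specs)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    optimizer.zero_grad()
    total_loss = 0.0
    for task, n_classes in task_specs.items():
        x = torch.randn(16, 32)                      # dummy batch; replace with real data
        y = torch.randint(0, n_classes, (16,))
        total_loss = total_loss + F.cross_entropy(model(x, task), y)
    total_loss.backward()
    optimizer.step()
```

The only difference between the "large-scale" and "related-tasks" settings compared above is which tasks populate `task_specs`; the shared encoder and training loop stay the same.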
Recently, it has been observed that a transfer learning solution might be all we need to solve many ...
Transfer learning approaches have been shown to significantly improve performance on downstream tasks. Ho...
Model sparsification in deep learning promotes simpler, more interpretable models with fewer paramet...
Training a single model on multiple input domains and/or output tasks allows for compressing informa...
While multi-task learning (MTL) has gained significant attention in recent years, its underlying mec...
Recent multi-task learning research argues against unitary scalarization (a gradient-surgery alternative is sketched after this list), where training simply mini...
Curriculum learning strategies in prior multi-task learning approaches arrange datasets in a difficu...
We investigate the sample complexity of learning the optimal arm for multi-task bandit problems. Arm...
Multi-Task Learning is today an interesting and promising field that many regard as a must for ach...
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal ...
One of the most salient and well-recognized features of human goal-directed behavior is our limited ...
Recent research has proposed a series of specialized optimization algorithms for deep multi-task mod...
Multi-Task Learning (MTL) is a widely-used and powerful learning paradigm for training deep neural n...
In this paper, we frame homogeneous-feature multi-task learning (MTL) as a hierarchical representati...
Multitask learning (MTL) has achieved remarkable success in numerous domains, such as healthcare, co...
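Several of the snippets above contrast unitary scalarization (simply summing the task losses) with specialized multi-task optimization algorithms. As one hedged illustration of the latter family, the sketch below implements a PCGrad-style "gradient surgery" step; this is a representative example chosen for concreteness, not the method of any specific paper listed here, and the learning rate and plain SGD update are placeholder choices.

```python
# PCGrad-style gradient surgery step (illustrative sketch).
import random
import torch

def pcgrad_step(model, task_losses, lr=1e-2):
    """One gradient-surgery update: project away conflicting components between
    per-task gradients, merge them, and apply the result as a plain SGD step."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Flatten each task's gradient into a single vector.
    task_grads = []
    for loss in task_losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        task_grads.append(torch.cat([g.reshape(-1) for g in grads]))

    # For each task gradient, remove its projection onto any other task gradient
    # it conflicts with (negative inner product), in a random order.
    projected = []
    for i, g_i in enumerate(task_grads):
        g = g_i.clone()
        others = [g_j for j, g_j in enumerate(task_grads) if j != i]
        random.shuffle(others)
        for g_j in others:
            dot = torch.dot(g, g_j)
            if dot < 0:
                g = g - dot / (g_j.norm() ** 2 + 1e-12) * g_j
        projected.append(g)
    merged = torch.stack(projected).sum(dim=0)

    # Write the merged flat gradient back into the parameters (plain SGD step).
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p -= lr * merged[offset:offset + n].view_as(p)
            offset += n
```

A unitary-scalarization baseline corresponds to skipping the projection step and summing `task_grads` directly, i.e. calling `backward()` on the summed losses, which is exactly the objective whose sufficiency these papers debate.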