This paper targets multi-task dense prediction, which aims to perform simultaneous learning and inference across multiple dense prediction tasks in a single framework. A core design question is how to effectively model cross-task interactions so that all tasks improve together by exploiting their inherent complementarity and consistency. Existing works typically add expensive distillation modules that perform explicit interaction computations among task-specific features in both training and inference, which complicates adaptation to different task sets and reduces efficiency because the multi-task model grows substantially. In contrast, we introduce feature-wise contrast...
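The abstract is cut off before the method details, but the stated idea is to replace explicit distillation modules with a training-only, feature-wise contrastive regularizer between tasks. A generic InfoNCE-style sketch of such a cross-task loss is below; the function name, shapes, and formulation are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def cross_task_contrastive_loss(feat_a, feat_b, temperature=0.1):
    """Generic InfoNCE-style loss between two tasks' flattened feature maps.

    Features at the same spatial position are treated as positives, all other
    positions as negatives. This is a hypothetical sketch of a "feature-wise
    contrast" regularizer; the truncated abstract does not give the exact form.

    feat_a, feat_b: (N, C) arrays -- N spatial positions, C channels.
    """
    # L2-normalize so dot products are cosine similarities.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal (same spatial position across tasks).
    return -np.mean(np.diag(log_probs))
```

Because such a loss is applied only during training, inference uses the unmodified task heads, which matches the abstract's efficiency argument against distillation modules that must also run at test time.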
We present a method for learning a low-dimensional representation which is shared across a set of mu...
In this paper we examine the problem of pr...
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related ta...
Previous multi-task dense prediction studies developed complex pipelines such as multi-modal distill...
We investigate multi-task learning from an output space regularization perspective. Most multi-task ...
Multi-task learning solves multiple related learning problems simultaneously by sharing some common ...
In multi-task learning, multiple related tasks are considered simultaneously, with the goal to impro...
We address the problem of joint feature selection across a group of related classification or regres...
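Joint feature selection across related tasks is commonly formulated with a group-sparse (l2,1) penalty that zeroes out entire feature rows shared across all tasks. The abstract is truncated, so the paper's exact algorithm is unknown; below is a standard proximal-gradient sketch of that formulation, with all function names and parameters chosen for illustration:

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft-thresholding: proximal operator of t * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def multitask_feature_selection(Xs, ys, lam=0.1, lr=0.1, iters=500):
    """Least-squares regression over T tasks with an l2,1 penalty on the
    (features x tasks) weight matrix, solved by proximal gradient descent.
    A row of W driven to zero means that feature is dropped for all tasks."""
    d = Xs[0].shape[1]
    T = len(Xs)
    W = np.zeros((d, T))                      # column t = weights for task t
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = X.T @ (X @ W[:, t] - y) / len(y)   # per-task gradient
        W = l21_prox(W - lr * G, lr * lam)    # gradient step + group shrinkage
    return W
```

On synthetic data where only a few features matter for every task, the returned matrix has near-zero rows for the irrelevant features, which is the joint-selection behavior the abstract describes.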
In multi-task learning, when the number of tasks is large, pairwise task relations exhibit sparse pa...
Convolution neural networks (CNNs) and Transformers have their own advantages and both have been wid...
Multi-task learning has recently become a promis...
Despite the recent advances in multi-task learning of dense prediction problems, most methods rely o...
Learning discriminative task-specific features simultaneously for multiple distinct tasks is a funda...
Multi-task learning can extract the correlation of multiple related machine learning problems to imp...
Multi-task learning offers a way to benefit from synergy of multiple related prediction tasks via th...