We investigate input-conditioned hypernetworks for multi-tasking in NLP, generating parameter-efficient adaptations for a decoder using a hypernetwork conditioned on the output of an encoder. This approach produces a unique decoder for every input instance, allowing the network a larger degree of flexibility than prior work that specializes the decoder for each task. We apply our method to sequence classification tasks, extractive QA, and summarisation, and find that it surpasses previous parameter-efficient fine-tuning methods and often outperforms fully fine-tuning the underlying model. An analysis of the embeddings used by our hypernetwork shows that they are sensitive to output label and type, suggesting that our approach better maps from...
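A minimal sketch of the core idea, assuming a PyTorch-style interface: a small hypernetwork maps a pooled encoder representation to the weights of a bottleneck adapter inserted into the decoder, so each input instance receives its own decoder adaptation. The module names, dimensions, and mean-pooling choice below are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AdapterHyperNet(nn.Module):
    """Generates per-instance adapter weights from a pooled encoder state (illustrative sketch)."""
    def __init__(self, enc_dim=768, hidden_dim=768, bottleneck=64):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.bottleneck = bottleneck
        # One linear head per generated parameter block of the bottleneck adapter.
        self.to_down = nn.Linear(enc_dim, hidden_dim * bottleneck)
        self.to_up = nn.Linear(enc_dim, bottleneck * hidden_dim)

    def forward(self, enc_states, enc_mask):
        # Mean-pool the encoder output into one conditioning vector per instance.
        mask = enc_mask.unsqueeze(-1).float()
        pooled = (enc_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        down = self.to_down(pooled).view(-1, self.hidden_dim, self.bottleneck)
        up = self.to_up(pooled).view(-1, self.bottleneck, self.hidden_dim)
        return down, up

def apply_adapter(decoder_hidden, down, up):
    # Bottleneck adapter with a residual connection, using the generated weights;
    # applied inside the (frozen) decoder layers while only the hypernetwork is trained.
    return decoder_hidden + torch.relu(decoder_hidden @ down) @ up
```

In this sketch only the hypernetwork's parameters are updated during fine-tuning; the generated `down`/`up` matrices differ per instance because they are functions of the encoder output rather than learned per task.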
In this work we address task interference in universal networks by considering that a network is tr...
Attention-based Encoder-Decoder is an effective architecture for neural machine translation (NMT),...
Neural networks have seen an explosion of usage and research in the past decade, particularly within...
Prompt-Tuning is a new paradigm for finetuning pre-trained language models in a parameter-efficient ...
State-of-the-art encoder-decoder models (e.g. for machine translation (MT) or speech recognition (AS...
In sequence-to-sequence tasks, sentences with heterogeneous semantics or grammatical structures may ...
In this paper, we frame homogeneous-feature multi-task learning (MTL) as a hierarchical representati...
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting the pre-trained langu...
Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new...
Massively multilingual models are promising for transfer learning across tasks and languages. Howeve...
Learning from structured data is a core machine learning task. Commonly, such data is represented as...
Adapting large-scale pretrained models to various downstream tasks via fine-tuning is a standard met...
Multilingual machine translation suffers from negative interference across languages. A common solut...
Recent developments in machine translation experiment with the idea that a model can improve the tra...
Selecting optimal parameters for a neural network architecture can often make the difference between...