This paper describes the submission of UZH_CLyp for the SemEval 2023 Task 9 "Multilingual Tweet Intimacy Analysis". We achieved second-best results in all 10 languages according to the official evaluation measure for the regression task, Pearson's correlation. Our cross-lingual transfer learning approach explores the benefits of using a Head-First Fine-Tuning method (HeFiT) that first updates only the regression head parameters and then also updates the pre-trained transformer encoder parameters at a reduced learning rate. Additionally, we study the impact of using a small set of automatically generated examples (in our case, from ChatGPT) for low-resource settings where no human-labeled data is available. Our study shows that HeFiT stabilizes training a...
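A minimal sketch of the two-phase HeFiT schedule described above, assuming a PyTorch setup: phase 1 freezes the encoder and trains only the regression head; phase 2 unfreezes the encoder and continues with a reduced learning rate for it. The model class, learning rates, and helper names here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class IntimacyRegressor(nn.Module):
    """Illustrative encoder + linear regression head for a scalar score."""
    def __init__(self, encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        return self.head(self.encoder(x)).squeeze(-1)

def hefit_optimizers(model, head_lr=1e-3, encoder_lr=1e-5):
    """Return the phase-1 optimizer and a callable that starts phase 2.

    Phase 1: encoder frozen, only the head is updated.
    Phase 2: encoder unfrozen, updated at a reduced learning rate.
    """
    for p in model.encoder.parameters():
        p.requires_grad = False
    phase1_opt = torch.optim.AdamW(model.head.parameters(), lr=head_lr)

    def start_phase2():
        for p in model.encoder.parameters():
            p.requires_grad = True
        return torch.optim.AdamW(
            [{"params": model.encoder.parameters(), "lr": encoder_lr},
             {"params": model.head.parameters(), "lr": head_lr}]
        )

    return phase1_opt, start_phase2
```

In practice each phase would run its own training loop; the key design choice is that the randomly initialized head is settled first, so its large early gradients do not disturb the pre-trained encoder weights.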
Supervised deep learning-based approaches have been applied to task-oriented dialog and have proven ...
Transfer learning from large language models (LLMs) has emerged as a powerful technique to enable kn...
The present study describes our submission to SemEval 2018 Task 1: Affect in Tweets. Our Spanish-onl...
In online domain-specific customer service applications, many companies struggle to deploy advanced ...
We present TwHIN-BERT, a multilingual language model trained on in-domain data from the popular soci...
Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracte...
Pre-trained multilingual language models show significant performance gains for zero-shot cross-ling...
Platforms that feature user-generated content (social media, online forums, newspaper comment sectio...
Cross-lingual transfer learning with large multilingual pre-trained models can be an effective appro...
The brittleness of finetuned language model performance on out-of-distribution (OOD) test samples in...
This paper introduces the joint submission of the Beijing Jiaotong University and WeChat AI to the W...
This is an accepted manuscript of a paper published by ACM on 10/11/2021, available online: https://...
Fine-tuning the entire set of parameters of a large pretrained model has become the mainstream appro...