This paper presents team BRUMS' submission to SemEval-2020 Task 3: Graded Word Similarity in Context. The system utilises state-of-the-art contextualised word embeddings with several task-specific adaptations, including stacked embeddings and average embeddings. Overall, the approach achieves good evaluation scores across all languages while remaining simple. In the final rankings, our approach placed within the top 5 solutions for each language and obtained 1st place in the Finnish subtask-2.
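As a rough illustration of the kind of pipeline the abstract describes (contextualised embeddings with stacked layers and sub-token averaging, scored by cosine similarity), the following is a minimal sketch. It assumes the HuggingFace transformers library and a hypothetical bert-base-multilingual-cased backbone; the layer choices and helper names are illustrative, not the exact BRUMS configuration.

```python
# Minimal sketch: contextualised-embedding similarity of a word pair in context.
# Assumptions (not from the paper): HuggingFace transformers, mBERT backbone,
# last-four-layer stacking, sub-token averaging, cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # hypothetical backbone choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def word_embedding(context: str, word: str) -> torch.Tensor:
    """Embed `word` within `context`: concatenate ("stack") the last four
    hidden layers per token, then average over the word's sub-tokens."""
    encoding = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**encoding).hidden_states  # tuple of (1, seq_len, dim)
    stacked = torch.cat(hidden_states[-4:], dim=-1).squeeze(0)  # (seq_len, 4*dim)
    # Locate the word's sub-tokens in the context (simple id matching).
    word_ids = set(tokenizer(word, add_special_tokens=False)["input_ids"])
    token_ids = encoding["input_ids"].squeeze(0).tolist()
    positions = [i for i, t in enumerate(token_ids) if t in word_ids]
    if not positions:  # fallback: average everything except [CLS]/[SEP]
        positions = list(range(1, len(token_ids) - 1))
    return stacked[positions].mean(dim=0)

def similarity_in_context(word1: str, word2: str, context: str) -> float:
    """Graded similarity of a word pair judged within a shared context."""
    v1 = word_embedding(context, word1)
    v2 = word_embedding(context, word2)
    return torch.cosine_similarity(v1, v2, dim=0).item()

if __name__ == "__main__":
    print(similarity_in_context("bank", "river",
                                "They sat on the bank of the river."))
```

In the GWSC setting, such a score would be computed for the same word pair under two different contexts, with the change in similarity compared against the graded human ratings.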