This paper introduces a new SemEval task on Cross-Level Semantic Similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. High-quality data sets were constructed for four comparison types using multi-stage annotation procedures with a graded scale of similarity. Nineteen teams submitted 38 systems. Most systems surpassed the baseline performance, with several attaining high performance for multiple comparison types. Further, our results show that comparisons of semantic representation increase performance beyond what is possible with text alone.
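To make the task setup concrete, below is a minimal illustrative sketch of scoring a paragraph-sentence pair on a graded 0-4 scale using simple lexical overlap. This is not the official task baseline or any participant's system; the tokenizer, the overlap measure, and the mapping onto the 0-4 scale are assumptions made purely for illustration.

```python
# Illustrative sketch only: a naive lexical-overlap scorer for a
# paragraph-sentence pair, mapped onto a graded 0-4 similarity scale.
# NOT the official SemEval-2014 Task 3 baseline; tokenizer, overlap
# measure, and scaling are assumptions for illustration.
import re


def tokens(text: str) -> set[str]:
    """Lowercase word tokens via a simple regex (assumed tokenizer)."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def graded_similarity(paragraph: str, sentence: str) -> float:
    """Return a score in [0, 4]: 4 = the sentence's meaning is fully covered."""
    p, s = tokens(paragraph), tokens(sentence)
    if not p or not s:
        return 0.0
    # Fraction of the smaller item's tokens also found in the larger item,
    # echoing the idea that the paragraph should "capture" the sentence.
    coverage = len(p & s) / len(s)
    return 4.0 * coverage


if __name__ == "__main__":
    paragraph = ("The committee reviewed the proposal for a new city park "
                 "and approved funding for its construction next spring.")
    sentence = "The city approved funding for a new park."
    print(f"graded similarity: {graded_similarity(paragraph, sentence):.2f}")
```

Participating systems went well beyond such surface overlap; as the abstract notes, comparing semantic representations improved performance beyond what text alone allows.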
Semantic Textual Similarity (STS) measures the meaning similarity of sentences...
The International Workshop on Semantic Evaluation (SemEval) is an ongoing series of evaluations of NLP ...
Semantic similarity is an essential component of many Natural Language Processing applications. Howe...
Semantic similarity has typically been measured across items of approximately similar sizes. As a re...
Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. Thi...
We present in this paper our system developed for SemEval 2015 Shared Task 2 (2a - English Semantic ...
In this paper we describe the specifications and results of the UMCC_DLSI system, which was involved in ...
Recently, the task of measuring semantic similarity between given texts has drawn much attention f...
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include...
This paper presents the system SSMT measuring the semantic similarity between a paragraph and a sent...
This paper provides a system description of the cross-level semantic similarity task for the SEMEVAL-2...
In many natural language understanding applications, text processing requires comparing lexical unit...