Word embeddings have developed into a major NLP tool with broad applicability. Understanding the semantic content of word embeddings remains an important challenge for additional applications. One aspect of this issue is to explore the interpretability of word embeddings. Sparse word embeddings have been proposed as models with improved interpretability. Continuing this line of research, we investigate the extent to which human interpretable semantic concepts emerge along the bases of sparse word representations. In order to have a broad framework for evaluation, we consider three general approaches for constructing sparse word representations, which are then evaluated in multiple ways. We propose a novel methodology to evaluate the semanti...
This paper presents an approach for investigating the nature of semantic information captured by wor...
In this paper, we propose an extension of the WordNet conceptual model, with the final purpose of en...
Continuous word representations have been remarkably useful across NLP tasks but remain poorly unde...
While word embeddings have proven to be highly useful in many NLP tasks, they are difficult to inter...
Word embeddings typically represent different meanings of a word in a single conflated vector. Emp...
Words are not detached individuals but part of a beautiful interconnected web of related concepts, a...
Word embeddings have been very successful in many natural language processing tasks, but they charac...
Word embeddings are vectorial semantic representations built with either counting or predicting tech...
Abstract Background In the past few years, neural word embeddings have been widely used in text mini...
Representing the semantics of words is a fundamental task in text processing. Several research studi...
Word embeddings are a key component of high-performing natural language processing (NLP) systems, bu...
In recent years it has become clear that data is the new source of power and wealth. The compani...
Real-valued word embeddings have transformed natural language processing (NLP) applications, recogni...
Word embeddings are a key component of high-performing natural language processing (NLP) systems, bu...
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rig...