In this paper we present BabelNet, a very large, wide-coverage multilingual semantic network. The resource is constructed automatically by a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. Machine Translation is then applied to enrich the resource with lexical information in all languages. Experiments on new and existing gold-standard datasets show the high quality and coverage of the resource. © 2010 Association for Computational Linguistics
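The abstract describes merging a lexicographic sense inventory (WordNet) with encyclopedic entries (Wikipedia), then filling lexical gaps via Machine Translation. A minimal sketch of the kind of merged record this could produce is shown below; the class name, fields, and example values are hypothetical illustrations, not BabelNet's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class BabelSynsetSketch:
    """Hypothetical record merging a WordNet synset with the Wikipedia
    page describing the same concept, plus per-language lexicalizations."""
    wordnet_synset: str                 # a WordNet sense key, e.g. "balloon.n.01"
    wikipedia_page: str                 # title of the matching encyclopedic entry
    lexicalizations: dict = field(default_factory=dict)  # language code -> words

# Example concept: the aerostat sense of "balloon".
synset = BabelSynsetSketch(
    wordnet_synset="balloon.n.01",
    wikipedia_page="Balloon (aeronautics)",
)

# Per the abstract, lexicalizations in other languages could come from
# Wikipedia's inter-language links where available, and from machine
# translation otherwise.
synset.lexicalizations["en"] = ["balloon"]
synset.lexicalizations["it"] = ["pallone aerostatico"]
```

The point of the sketch is the shape of the integration: each multilingual node ties one lexicographic sense to one encyclopedic page, with translations attached per language.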
The use of wide-coverage, general-domain semantic resources has become a common practice and ofte...
We present a knowledge-rich approach to computing semantic relatedness which exploits the joint cont...
Abstract. Semantic processing is one of the most compelling and ambitious objectives in today’s Natu...
We present an automatic approach to the construction of BabelNet, a very large, wide-coverage multil...
The intelligent manipulation of symbolic knowledge has been a long-sought goal of AI. However, when ...
Accurate semantic modeling lies at the very core of today’s Natural Language Processing (NLP). Getti...
Recent years have witnessed a surge in the amount of semantic information published on the Web. Inde...
Empowered by Semantic Web technologies and the recent Linked Data uptake, the publication of linguis...
Knowledge of word meanings and their relations across languages is vital for enabling semantic infor...
We present UWN, a large multilingual lexical knowledge base that describes the meanings and relati...
Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous ...
Abstract. A knowledge base for real-world language processing applications should consist of a large b...
This paper describes the automatic creation of semantic networks from Wikipedia. Following Lipczak e...