The World Wide Web has become an important knowledge source for many research fields, and the quality of Web-acquired knowledge has a direct impact on their performance. While evaluating the vast amount of Web resources in full is out of the question, in this paper we examined thousands of sentences containing twelve pre-selected words and produced several quality measures, including sentence coherence and sense distribution information. Our goal is to provide some insight into several Computational Linguistics areas that acquire knowledge from the Web.
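As a rough illustration of the kind of sense distribution measure the abstract mentions (not the paper's actual method), the following minimal Python sketch tags occurrences of a target word in Web-retrieved sentences with NLTK's Lesk disambiguator and tallies the relative frequency of each WordNet sense. The example sentences and the target word "bank" are hypothetical placeholders.

from collections import Counter

import nltk
from nltk.wsd import lesk

# One-time setup: nltk.download("punkt"); nltk.download("wordnet")

def sense_distribution(sentences, target_word):
    """Tag each occurrence of target_word with a WordNet sense via Lesk,
    then return the relative frequency of each observed sense."""
    counts = Counter()
    for sentence in sentences:
        tokens = nltk.word_tokenize(sentence.lower())
        if target_word not in tokens:
            continue
        synset = lesk(tokens, target_word)  # may return None on no overlap
        if synset is not None:
            counts[synset.name()] += 1
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()} if total else {}

if __name__ == "__main__":
    web_sentences = [  # hypothetical Web-acquired examples
        "The bank approved the loan after reviewing the application.",
        "We had a picnic on the bank of the river.",
        "She deposited the check at the bank this morning.",
    ]
    print(sense_distribution(web_sentences, "bank"))

A distribution skewed heavily toward one sense, or a large share of sentences where no sense can be assigned at all, would be exactly the kind of quality signal such measures aim to surface.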
The paper compares systematically the utility of specially-made text corpora and the textual resourc...
Research in Natural Language Processing (NLP) has in recent years benefited from the enormous amount...
At the beginning of the first chapter the interdisciplinary setting between linguistics, corpus ling...
In corpus-based lexicography and natural language processing fields some authors have prop...
The 60-year-old dream of computational linguistics is to make computers capable of communi...
The emergence of Web 2.0 enables new insights in many research areas. In this study, we examine how ...
The Web is a very rich source of linguistic data, and in the last few years it has been used very in...
From the beginning of the twentieth century on, the use of the World Wide Web has become a current t...
An important problem in Natural Language Processing is identifying the correct sense of a word in a ...
Knowing the correct distribution of senses within a corpus can potentially boost the performance of ...
We investigate the potential of using the web as a huge corpus for language studies. We test the hyp...
The unavailability of very large corpora with semantically disambiguated words is a major limitation...
The web is a potentially useful corpus for language study because it provides examples of language t...
This paper presents a method of acquiring knowledge from the Web for noun sense disambiguation. Word...
We have built a corpus containing texts in 106 languages from texts available on the Internet and on...