In the Chinese language, words consist of characters, each of which is composed of one or more components. Almost every individual Chinese character has a specific meaning, and the meaning of a word is usually closely related to the characters that comprise it. Likewise, sub-character components often make a predictable contribution to the meaning of a character, and in general characters that share components have similar or related meanings. It is easy to automatically decompose words into characters and their components. In this paper, we improve on a corpus-based approach to computing word similarity in Chinese by extending it according to the characters and components shared between words. In an evaluation on 30,000 word types (...
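The idea of extending a corpus-based similarity score with shared-character information could be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the Dice-coefficient overlap measure, the linear interpolation, and the weight `alpha` are all assumptions made for the example.

```python
def char_overlap(w1: str, w2: str) -> float:
    """Dice coefficient over the character sets of two words (illustrative)."""
    s1, s2 = set(w1), set(w2)
    if not s1 or not s2:
        return 0.0
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

def combined_similarity(w1: str, w2: str, corpus_sim: float,
                        alpha: float = 0.7) -> float:
    """Interpolate a corpus-based score with character overlap.

    `corpus_sim` stands in for any distributional similarity score
    (e.g. cosine similarity of context vectors); `alpha` is a
    hypothetical mixing weight.
    """
    return alpha * corpus_sim + (1 - alpha) * char_overlap(w1, w2)

# 火车 (train) and 汽车 (car) share the character 车 (vehicle),
# so the overlap term rewards the pair even if corpus evidence is sparse.
print(char_overlap("火车", "汽车"))  # → 0.5
```

The interpolation lets character evidence compensate for sparse corpus counts on rare words, which is the motivation the abstract above appeals to.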
Most word embedding methods take a word as a basic unit and learn embeddings according to words’ ex...
Chinese characters were created in China and spread to Japan through the Korean Peninsula. It is though...
Information about students’ mistakes opens a window to an understanding of their learning processes...
Distributional Similarity has attracted considerable attention in the field of natural language proc...
In this paper we propose a novel word representation for Chinese based on a state-of-the-art word em...
So far, most Chinese natural language processing neglects punctuation marks or oversimplifies their f...
Automatically detecting similar Chinese characters is useful in many areas, such as building intelli...
Automatically identifying Chinese characters that are similar in their glyphs, pronunciations, and mea...
Word similarity computation is a fundamental task for natural language processing. We organize a sema...
We propose cw2vec, a novel method for learning Chinese word embeddings. It is based on our observati...
Collocation extraction systems based on pure statistical methods suffer from two major problems. The...
Distributed word representations are very useful for capturing semantic information and have been su...