Choosing the most suitable classifier in a given linguistic context is a well-known problem in the production of Mandarin and many other languages. The present paper proposes a solution based on BERT, compares it to previous neural and rule-based models, and argues that the BERT model performs particularly well on the difficult cases where the classifier adds information to the text.
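To make the task concrete, the simplest rule-based baseline the paper alludes to can be sketched as a noun-to-classifier lookup that backs off to the general classifier 个 (gè). The table entries and function names below are illustrative, not drawn from the paper's data.

```python
# Hypothetical rule-based baseline for Mandarin classifier selection:
# look up the head noun in a small hand-written noun-to-classifier
# table and fall back to the general classifier 个 when the noun is
# unseen. Entries here are illustrative examples only.
NOUN_TO_CLASSIFIER = {
    "书": "本",    # books take 本 (běn)
    "马": "匹",    # horses take 匹 (pǐ)
    "桌子": "张",  # tables take 张 (zhāng)
}

GENERAL_CLASSIFIER = "个"

def predict_classifier(noun: str) -> str:
    """Return the classifier for a noun, defaulting to the general 个."""
    return NOUN_TO_CLASSIFIER.get(noun, GENERAL_CLASSIFIER)

def insert_classifier(numeral: str, noun: str) -> str:
    """Build a 'numeral + classifier + noun' phrase, e.g. 三本书."""
    return f"{numeral}{predict_classifier(noun)}{noun}"

print(insert_classifier("三", "书"))  # 三本书
print(insert_classifier("一", "猫"))  # unseen noun backs off: 一个猫
```

Such a baseline is exactly where the hard cases arise: whenever the default 个 is acceptable but a more specific classifier would add information, a lookup table has nothing to say, which is where a contextual model like BERT can help.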
Recently, transformer-based pretrained language models have demonstrated stellar performance in natu...
In Chinese, when objects are named with their quantity, a numeral classifier must be inserted betwee...
In the early named entity recognition models, most text processing focused only on the representatio...
This research, titled Classifier in Mandarin, aims to describe classifiers in Mandarin from the aspec...
Bigram language models are popular in many language processing applications, in both Indo-Europ...
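A bigram-style approach can be applied to classifier choice by counting (noun, classifier) co-occurrences in an annotated corpus and predicting the most frequent classifier for each noun. The sketch below is my own illustration under that assumption; the toy pairs are invented, not corpus data.

```python
# Illustrative bigram-style classifier baseline: count (noun, classifier)
# pairs from a toy annotated corpus and predict the argmax classifier
# for each noun, backing off to the general classifier 个 for unseen nouns.
from collections import Counter, defaultdict

# Toy (noun, classifier) training pairs; a real system would extract
# these from a large segmented and tagged corpus.
PAIRS = [
    ("书", "本"), ("书", "本"), ("书", "个"),
    ("马", "匹"), ("桌子", "张"),
]

counts: defaultdict[str, Counter] = defaultdict(Counter)
for noun, clf in PAIRS:
    counts[noun][clf] += 1

def most_likely_classifier(noun: str, default: str = "个") -> str:
    """Argmax over observed classifiers for this noun; back off to 个."""
    if noun in counts:
        return counts[noun].most_common(1)[0][0]
    return default

print(most_likely_classifier("书"))  # 本 (seen twice, vs. 个 once)
print(most_likely_classifier("猫"))  # unseen noun backs off to 个
```

Unlike a contextual model, this count-based baseline ignores the surrounding sentence entirely, so it cannot choose between equally frequent classifiers that differ in meaning.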
Language identification is the task of automatically determining the identity of a language conveyed...
Nowadays, most deep learning models ignore Chinese habits and global information when processing Chi...
Neural network models such as Transformer-based BERT, mBERT and RoBERTa are achieving impressive per...
Chinese classifiers are found to be a category that creates true challenges for second language lear...
The article is an essay on the development of technologies for natural language processing, which fo...
It has been shown through a number of experiments that neural networks can be used for a phonetic ty...
We provide a functional analysis of the unconventional sortal classifier syste...
This thesis promotes an interface inquiry into how classifiers are parameterized in Chinese, which i...
The materials and data of the experiments which investigate the semantic access of Mandarin classifi...