Multinomial Naive Bayes with Expectation Maximization (MNB-EM) is a standard semi-supervised learning method for augmenting Multinomial Naive Bayes (MNB) in text classification. Despite its success, MNB-EM is not stable, and may succeed or fail to improve MNB. We believe this is because MNB-EM lacks the ability to preserve the class distribution on words. In this paper, we propose a novel method to augment MNB-EM by leveraging word-level statistical constraints to preserve the class distribution on words. The word-level statistical constraints are further converted to constraints on document posteriors generated by MNB-EM. Experiments demonstrate that our method can consistently improve MNB-EM, and outperforms state-of-the-art baselines re...
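The plain MNB-EM backbone that this abstract builds on (before any word-level constraints are added) can be sketched as follows. This is a minimal illustration of the standard semi-supervised EM loop over a multinomial naive Bayes model; the function name, smoothing choice, and tiny count matrices are illustrative, not taken from the paper.

```python
import numpy as np

def mnb_em(X_l, y_l, X_u, n_classes, n_iters=10, alpha=1.0):
    """Semi-supervised multinomial naive Bayes trained with EM (MNB-EM).

    X_l: labeled document-term count matrix, shape (n_l, V)
    y_l: labels for X_l, values in {0, ..., n_classes - 1}
    X_u: unlabeled document-term count matrix, shape (n_u, V)
    Returns (log_prior, log_theta, R_u), where R_u holds the final
    class posteriors of the unlabeled documents.
    """
    n_l, V = X_l.shape
    y_l = np.asarray(y_l)

    # Hard responsibilities for labeled documents; these stay fixed.
    R_l = np.zeros((n_l, n_classes))
    R_l[np.arange(n_l), y_l] = 1.0

    def m_step(R_u):
        R = np.vstack([R_l, R_u])      # (n_l + n_u, K) responsibilities
        X = np.vstack([X_l, X_u])      # (n_l + n_u, V) word counts
        prior = (R.sum(axis=0) + alpha) / (R.sum() + n_classes * alpha)
        word_counts = R.T @ X          # expected per-class word counts
        theta = (word_counts + alpha) / (
            word_counts.sum(axis=1, keepdims=True) + alpha * V)
        return np.log(prior), np.log(theta)

    def e_step(log_prior, log_theta):
        log_post = X_u @ log_theta.T + log_prior  # unnormalized log posteriors
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        return post / post.sum(axis=1, keepdims=True)

    # Initialize from the labeled data alone (zero weight on unlabeled docs).
    R_u = np.zeros((X_u.shape[0], n_classes))
    log_prior, log_theta = m_step(R_u)
    for _ in range(n_iters):
        R_u = e_step(log_prior, log_theta)    # E-step: soft-label X_u
        log_prior, log_theta = m_step(R_u)    # M-step: re-estimate parameters
    return log_prior, log_theta, R_u
```

The paper's contribution would modify the E-step above, constraining the document posteriors `R_u` so that the class distribution on words is preserved; the unconstrained loop shown here is the unstable baseline the abstract refers to.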
We augment the naive Bayes model with an n-gram language model to address two shortcomings of naive ...
Recent approaches to text classification have used two different first-order probabilistic models fo...
Abstract—In this paper, we propose a new probabilistic model of naïve Bayes method which can be used...
Abstract. This paper presents empirical results for several versions of the multinomial naive Bayes ...
Due to its simplicity, efficiency, and effectiveness, multinomial naive Bayes (MNB) has been widely ...
This paper presents empirical results for several versions of the multinomial naive Bayes classifier...
Multinomial naive Bayes (MNB) is a popular method for document classification due to its computation...
We augment naive Bayes models with statistical n-gram language models to address shortcomings of t...
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small...
There are numerous text documents available in electronic form. More and more are becoming available...
Naive Bayes is often used as a baseline in text classification because it is fast and easy to implem...
The underlying assumption in traditional machine learning algorithms is that instances are Independe...
Abstract. This paper presents the method of significantly improving conventional Bayesian statistica...
Recent work in text classification has used two different first-order probabilistic models for class...