We introduce an information-theoretic criterion for Bayesian network structure learning which we call quotient normalized maximum likelihood (qNML). In contrast to the closely related factorized normalized maximum likelihood criterion, qNML satisfies the property of score equivalence. It is also decomposable and completely free of adjustable hyperparameters. For practical computations, we identify a remarkably accurate approximation proposed earlier by Szpankowski and Weinberger. Experiments on both simulated and real data demonstrate that the new criterion leads to parsimonious models with good predictive accuracy.
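As the name suggests, the qNML local score for a variable can be written as the log-quotient of two single-multinomial NML distributions: one over the family (child plus parents) collapsed into a single categorical variable, and one over the parents alone. The sketch below illustrates this structure; the function names and dict-of-rows data layout are illustrative only, and for clarity it computes the NML normalizing sum exactly via the Kontkanen–Myllymäki recurrence rather than the Szpankowski–Weinberger approximation the abstract recommends for practical use.

```python
import math
from collections import Counter

def multinomial_regret(n: int, k: int) -> float:
    """Exact NML normalizing sum C_k(n) for a k-valued categorical
    variable observed n times, via the linear-time recurrence
    C_k(n) = C_{k-1}(n) + (n / (k - 2)) * C_{k-2}(n)  for k >= 3."""
    if n == 0 or k == 1:
        return 1.0
    # Base case C_2(n) by direct summation (0^0 = 1 convention).
    c_prev = 1.0
    c = sum(
        math.comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
        for h in range(n + 1)
    )
    for j in range(3, k + 1):
        c_prev, c = c, c + n / (j - 2) * c_prev
    return c

def log_nml1(column, k: int) -> float:
    """Log NML probability of one categorical data column with k values:
    maximized log-likelihood minus the log normalizing sum."""
    n = len(column)
    counts = Counter(column)
    loglik = sum(h * math.log(h / n) for h in counts.values())
    return loglik - math.log(multinomial_regret(n, k))

def qnml_local_score(data, child, parents, arities) -> float:
    """Illustrative qNML local score: log NML of the collapsed
    (child + parents) column minus log NML of the parents column."""
    fam = [child] + list(parents)
    fam_col = [tuple(row[v] for v in fam) for row in data]
    par_col = [tuple(row[v] for v in parents) for row in data]
    k_fam = math.prod(arities[v] for v in fam)
    k_par = math.prod(arities[v] for v in parents)  # empty product = 1
    return log_nml1(fam_col, k_fam) - log_nml1(par_col, k_par)
```

With an empty parent set the parent term vanishes (one group, normalizer 1), so the score reduces to the plain NML score of the child column, which is what makes the criterion decomposable over families.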
The majority of real-world problems require addressing incomplete data. The use of the structural ex...
Learning optimal Bayesian networks (BN) from data is NP-hard in general. Nevertheless, certain BN cl...
To learn the network structures used in probabilistic models (e.g., Bayesian network), many research...
This paper addresses the problem of learning Bayesian network structures from data based on...
We consider the problem of learning Bayesian network models in a non-informative setting, wh...
We introduce a Bayesian network classifier less restrictive than Naive Bayes (NB) and Tree Augmented...
This work presents two new score functions based on the Bayesian Dirichlet equivalent unif...
This paper addresses exact learning of Bayesian network structure from data based on the Bayesian Di...
Bayesian networks learned from data and background knowledge have been broadly used to reason under ...
We study online learning under logarithmic loss with regular parametric models. In this setting, eac...
The motivation for the paper is the geometric approach to learning Bayesian network (BN) str...
The bnclassify package provides state-of-the-art algorithms for learning Bayesian network classifier...