This paper addresses the estimation of the parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, as it is affected by both overfitting and model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can be easily applied on top of EM, while the entropy idea can also be implemented in a more sophisticated way, through a dedicated non-linear solver. A...
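The EM-based estimation that this abstract refers to can be illustrated on a toy case: a two-node network X -> Y with binary variables, where X is occasionally unobserved. The sketch below is illustrative only (the `em_step` helper, the parameter names, and the data are assumptions, not taken from the paper); it alternates an E-step that computes the posterior over the hidden X and an M-step that re-normalizes the resulting expected counts.

```python
def em_step(data, px, q0, q1):
    """One EM iteration for the network X -> Y (both binary).

    px = P(X=1), q0 = P(Y=1 | X=0), q1 = P(Y=1 | X=1).
    Records are (x, y) pairs; x may be None (missing).
    """
    n1 = n0 = n1y = n0y = 0.0
    for x, y in data:
        if x is None:
            # E-step: posterior weight of X=1 given the observed Y
            l1 = px * (q1 if y == 1 else 1 - q1)
            l0 = (1 - px) * (q0 if y == 1 else 1 - q0)
            w1 = l1 / (l1 + l0)
        else:
            w1 = float(x)
        # Accumulate expected sufficient statistics
        n1 += w1
        n0 += 1 - w1
        n1y += w1 * y
        n0y += (1 - w1) * y
    # M-step: parameters are normalized expected counts
    return n1 / len(data), n0y / n0, n1y / n1

# Toy data: X is sometimes unobserved (None)
data = [(1, 1), (1, 1), (0, 0), (0, 1), (None, 1), (None, 0)]
px, q0, q1 = 0.5, 0.5, 0.5
for _ in range(50):
    px, q0, q1 = em_step(data, px, q0, q1)
```

As the abstract notes, in practice this loop is restarted from several random initializations and the run with the highest log-likelihood is kept, which is precisely the model-selection step the paper argues is fragile.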
We propose a Bayesian approach to learning Bayesian network models from incomplete data. The objec...
The majority of real-world problems require addressing incomplete data. The use of the structural ex...
Bayesian networks with mixtures of truncated exponentials (MTEs) are gaining popularity as a flexibl...
This chapter addresses the problem of estimating the parameters of a Bayesian network from ...
We propose an efficient family of algorithms to learn the parameters of a Bayesian network from inco...
We propose a family of efficient algorithms for learning the parameters of a Bayesian network from i...
We investigate methods for parameter learning from incomplete data that is not missing at random. Li...
The creation of Bayesian networks often requires the specification of a large number of parameters, ...
This paper describes an Imprecise Dirichlet Model and the maximum entropy criterion to lear...
Learning from data ranges from extracting essentials from the data to the more fundamental and v...
The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in model...
Incomplete data are a common feature in many domains, from clinical trials to industrial application...
We present an algorithm for learning parameters of Bayesian networks from incomplete data. B...