This paper explores the effects of parameter sharing on Bayesian network (BN) parameter learning when there is incomplete data. Using the Expectation Maximization (EM) algorithm, we investigate how varying degrees of parameter sharing, varying numbers of hidden nodes, and different dataset sizes impact EM performance. The specific metrics of EM performance examined are: likelihood, error, and the number of iterations required for convergence. These metrics are important in a number of applications, and we emphasize learning of BNs for diagnosis of electrical power systems. One main point, which we investigate both analytically and empirically, is how parameter sharing impacts the error associated with EM’s parameter estimates.
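To make the parameter-sharing setup concrete, the sketch below runs EM on a toy two-child network H → X1, H → X2 in which both children are tied to a single CPT P(X | H) and H is never observed. This is a minimal illustration, not the paper's implementation; the network, variable names, and numbers are assumptions chosen only to show how expected counts from tied parameters are pooled in the M-step.

```python
# Minimal sketch (illustrative, not the paper's code): EM with tied CPTs.
# Model: H -> X1, H -> X2, where X1 and X2 share one CPT P(X | H).
# H is hidden; X1, X2 are observed. All names and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth parameters, used only to simulate data.
true_prior = np.array([0.3, 0.7])            # P(H)
true_cpt   = np.array([[0.9, 0.1],           # P(X | H=0)
                       [0.2, 0.8]])          # P(X | H=1)

def simulate(n):
    h = rng.choice(2, size=n, p=true_prior)
    x1 = np.array([rng.choice(2, p=true_cpt[hi]) for hi in h])
    x2 = np.array([rng.choice(2, p=true_cpt[hi]) for hi in h])
    return x1, x2                             # H is discarded (hidden)

def em(x1, x2, iters=50):
    prior = np.array([0.5, 0.5])              # initial P(H)
    cpt = np.array([[0.6, 0.4],
                    [0.4, 0.6]])              # initial shared P(X | H)
    for _ in range(iters):
        # E-step: posterior P(H | x1, x2) for every record.
        lik = prior * cpt[:, x1].T * cpt[:, x2].T        # shape (n, 2)
        post = lik / lik.sum(axis=1, keepdims=True)
        # M-step: pool expected counts from BOTH children, because
        # their CPTs are tied -- this pooling is the parameter sharing.
        prior = post.mean(axis=0)
        counts = np.zeros((2, 2))
        for x in (x1, x2):
            for v in (0, 1):
                counts[:, v] += post[x == v].sum(axis=0)
        cpt = counts / counts.sum(axis=1, keepdims=True)
    return prior, cpt

x1, x2 = simulate(5000)
prior_hat, cpt_hat = em(x1, x2)
print("estimated P(H):", prior_hat)
print("estimated shared P(X|H):\n", cpt_hat)
```

Because the two children contribute to a single set of expected counts, the shared CPT is effectively estimated from twice as many observations, which is one way sharing can reduce the error of EM's estimates.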
We compare three approaches to learning numerical parameters of discrete Bayesian networks from cont...
The task of learning models for many real-world problems requires incorporating domain knowledge in...
Domain experts can often quite reliably specify the sign of influences between variables in a Bayesi...
Bayesian network (BN) parameter learning from incomplete data can be a computationally expensive tas...
This work applies the distributed computing framework MapReduce to Bayesian network parameter learni...
This paper addresses the estimation of parameters of a Bayesian network from incomplete data. The ta...
Incomplete data are a common feature in many domains, from clinical trials to industrial application...
The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in model...
It is possible to learn the parameters of a given Bayesian network structure from data becau...
We compare three approaches to learning numerical parameters of Bayesian networks from continuous da...
Bayesian networks with mixtures of truncated exponentials (MTEs) are gaining popularity as a flexibl...
This paper re-examines the problem of parameter estimation in Bayesian networks with missing values ...
The creation of Bayesian networks often requires the specification of a large number of parameters, ...
The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in mod...
We propose an efficient family of algorithms to learn the parameters of a Bayesian network from inco...