The MTE (Mixture of Truncated Exponentials) model makes it possible to handle Bayesian networks containing discrete and continuous variables simultaneously. This model offers an alternative to discretisation, since standard algorithms for computing posterior probabilities in the network, originally designed for discrete variables, can be applied directly to MTE models. In this paper, we study the problem of estimating these models from data. We propose an iterative algorithm based on least squares approximation. The performance of the algorithm is tested with both artificial and real data.
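The abstract does not detail the estimation procedure, but the least-squares idea can be illustrated with a minimal sketch. The code below fits a single hypothetical MTE term of the form f(x) = a + b·exp(c·x) to target density values: for each candidate exponent c on a grid, the coefficients (a, b) have a closed-form linear least-squares solution, and the best candidate is kept. This is an assumption-laden illustration, not the authors' actual algorithm, which iterates in a more refined way.

```python
import math

def fit_mte_term(xs, ys, c_grid):
    """Fit y ~ a + b*exp(c*x) by least squares.

    For each candidate exponent c, the model is linear in (a, b), so we
    solve the 2x2 normal equations exactly and keep the c with the
    smallest sum of squared errors (SSE).
    """
    best = None
    n = len(xs)
    for c in c_grid:
        e = [math.exp(c * x) for x in xs]
        # Normal equations for the coefficients [a, b].
        s1, se, see = n, sum(e), sum(v * v for v in e)
        sy, sey = sum(ys), sum(v * y for v, y in zip(e, ys))
        det = s1 * see - se * se
        if abs(det) < 1e-12:  # exp(c*x) nearly constant: skip this c
            continue
        a = (sy * see - se * sey) / det
        b = (s1 * sey - se * sy) / det
        sse = sum((a + b * ev - y) ** 2 for ev, y in zip(e, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    return best[1:], best[0]

# Toy target: the density of an Exp(2) variable truncated to [0, 1],
# which is itself an exact single-term MTE (a = 0, b = 2/Z, c = -2).
Z = 1 - math.exp(-2.0)
xs = [i / 50 for i in range(51)]
ys = [2.0 * math.exp(-2.0 * x) / Z for x in xs]
(a, b, c), sse = fit_mte_term(xs, ys, [k / 10 for k in range(-50, 51) if k != 0])
print(c, round(sse, 9))
```

Because the toy target is itself a single truncated exponential, the grid search recovers c = -2 and an essentially zero residual; real MTE estimation must also choose split points and the number of exponential terms per interval.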
Has been accepted for publication in the International Journal of Approximate Reasoning, Elsevier Sc...