Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains. On the other hand, estimating an MTE from data has turned out to be a difficult task, and most prevalent learning methods treat parameter estimation as a regression problem. The drawback of this approach is that by not directly attempting to find the parameters that maximize the likelihood, there is no principled way of e.g. performing subsequent model selection using those parameters. In this paper we describe an estimation method that directly aims at learning the maximum likelihood parameters of an MTE potential. Empirical results demonstrate that the proposed method yields signi...
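To make the distinction concrete, here is a minimal sketch of how a univariate MTE potential is evaluated and what the maximum-likelihood objective looks like. The parameter layout (`cuts`, `params`) and all names are illustrative assumptions, not taken from the paper:

```python
import math

def mte_density(x, cuts, params):
    """Evaluate a univariate MTE potential.

    On each interval [cuts[k], cuts[k+1]) the potential has the form
        f(x) = a0 + sum_i a_i * exp(b_i * x),
    where params[k] = (a0, [(a_1, b_1), ...]).  This parameterisation is
    an illustrative sketch, not the paper's exact representation.
    """
    for k in range(len(cuts) - 1):
        if cuts[k] <= x < cuts[k + 1]:
            a0, terms = params[k]
            return a0 + sum(a * math.exp(b * x) for a, b in terms)
    return 0.0  # outside the support

def log_likelihood(data, cuts, params):
    # Maximum-likelihood estimation searches for the parameters that
    # maximise this sum, rather than minimising a regression error
    # against an empirical density estimate.
    return sum(math.log(mte_density(x, cuts, params)) for x in data)

# One interval [0, 1) with a single exponential term (parameters are
# not normalised; a proper density would also be constrained to
# integrate to one over its support).
cuts = [0.0, 1.0]
params = [(0.5, [(0.5, 1.0)])]
ll = log_likelihood([0.25, 0.5, 0.75], cuts, params)
```

The regression-style learners mentioned above would instead fit `f(x)` to a histogram or kernel estimate of the data; maximising `log_likelihood` directly is what makes the resulting parameters usable in likelihood-based model selection.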
In this paper we introduce a hill-climbing algorithm for structural learning of Bayesian networks fr...
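The greedy structure search mentioned in this abstract can be sketched in a few lines. The neighbourhood (single edge additions and removals only), the toy score, and all names below are simplifying assumptions for illustration, not the paper's actual algorithm or scoring metric:

```python
import itertools

def is_acyclic(edges, nodes):
    # Kahn's algorithm: a digraph is acyclic iff every node can be
    # removed in topological order.
    indeg = {n: 0 for n in nodes}
    for _, b in edges:
        indeg[b] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    removed = 0
    while queue:
        n = queue.pop()
        removed += 1
        for a, b in edges:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return removed == len(nodes)

def hill_climb(nodes, score):
    """Greedy search over DAGs: starting from the empty graph, apply the
    single edge addition or removal that most improves `score`, until no
    move helps (a local optimum)."""
    edges, best = frozenset(), score(frozenset())
    while True:
        candidates = []
        for e in itertools.permutations(nodes, 2):
            cand = edges - {e} if e in edges else edges | {e}
            if is_acyclic(cand, nodes):
                candidates.append((score(cand), cand))
        s, cand = max(candidates)
        if s <= best:
            return edges, best
        edges, best = cand, s

# Toy usage: a score that rewards edges of a known target structure and
# penalises spurious ones; the search recovers A -> B -> C.
nodes = ["A", "B", "C"]
target = {("A", "B"), ("B", "C")}
edges, best = hill_climb(nodes, lambda e: len(e & target) - 2 * len(e - target))
```

In a real structure learner the score would be a decomposable metric such as BIC evaluated on data, and the neighbourhood would typically also include edge reversals.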
Abstract. This paper presents the use of mixtures of truncated exponentials (MTE) potentials in two applic...
Has been accepted for publication in the International Journal of Approximate Reasoning, Elsevier Sc...
Bayesian networks with mixtures of truncated exponentials (MTEs) are gaining popularity as a flexibl...
The MTE (mixture of truncated exponentials) model makes it possible to deal with Bayesian networks containing d...
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization and Monte C...
In this paper we introduce an algorithm for learning hybrid Bayesian networks from data. The result ...
Abstract. Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for...