In this paper we describe a new method for learning hybrid Bayesian network models from data. The method utilizes a kernel density estimator, which is in turn "translated" into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. We argue that these estimators approximate the maximum likelihood estimators, and we compare our approach to previous attempts at learning hybrid Bayesian networks from data. We conclude that while the present method produces estimators that are slightly poorer than the state of the art (in terms of log-likelihood), it is significantly faster.
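The following is a minimal sketch of the two-step pipeline the abstract describes: fit a kernel density estimator to the data, then approximate it with a truncated basis expansion. It is an illustration under simplifying assumptions, not the paper's implementation: scipy's `gaussian_kde` stands in for the kernel density estimator, a plain polynomial basis on the data's range stands in for the MoTBF basis, and an unconstrained least-squares projection (itself a convex problem) stands in for the paper's convex optimization; the clipping and renormalization steps are ad hoc fixes for non-negativity and unit mass.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)   # toy one-dimensional sample

# Step 1: kernel density estimator fitted to the data.
kde = gaussian_kde(data)

# Truncation interval and evaluation grid for the "translation" step.
a, b = data.min(), data.max()
grid = np.linspace(a, b, 200)
target = kde(grid)                                 # KDE evaluated on the grid

# Step 2: project the KDE onto a truncated polynomial basis 1, x, ..., x^d
# via least squares (a stand-in for the paper's convex optimization).
degree = 6
basis = np.vander(grid, degree + 1, increasing=True)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)

approx = basis @ coef
approx = np.clip(approx, 0.0, None)                # crude non-negativity fix
approx /= np.trapz(approx, grid)                   # renormalize to integrate to 1

print("max abs deviation from KDE:", np.abs(approx - kde(grid)).max())
```

In this toy setup the polynomial coefficients play the role of the MoTBF parameters; the actual method would additionally enforce non-negativity and normalization inside the optimization rather than patching them afterwards.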