An iterative method is presented which gives an optimum approximation to the joint probability distribution of a set of binary variables, given the joint probability distributions of any subsets of the variables (any set of component distributions). The most significant feature of this approximation procedure is that there is no limitation on the number or type of component distributions that can be employed. Each step of the iteration gives an improved approximation, and the procedure converges to an approximation that is the minimum-information (i.e. maximum-entropy) extension of the component distributions employed.
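The classical realization of such an iterative scheme is iterative proportional fitting (IPF): starting from the uniform (maximum-entropy) table, each step rescales the joint so that one component marginal is matched exactly, and in the limit the table is the minimum-information extension of the components. The sketch below is a minimal illustration of that idea, not the paper's own algorithm; the function name `ipf_binary`, the dictionary interface for the component distributions, and the example numbers are assumptions introduced here.

```python
import numpy as np

def ipf_binary(marginals, n_vars, n_iter=200):
    """Fit a joint table over `n_vars` binary variables so its marginals
    match the given component distributions (iterative proportional fitting).

    marginals: dict mapping a tuple of variable indices to the target
    marginal table over those variables (hypothetical interface).
    """
    # Start from the uniform table: the maximum-entropy prior.
    joint = np.full((2,) * n_vars, 1.0 / 2 ** n_vars)
    for _ in range(n_iter):
        for subset, target in marginals.items():
            # Current marginal on `subset`: sum out all other axes.
            other = tuple(i for i in range(n_vars) if i not in subset)
            current = joint.sum(axis=other)
            # Multiplicative correction factor (guard empty cells).
            ratio = np.divide(target, current,
                              out=np.ones_like(target), where=current > 0)
            # Broadcast the factor back over the full table.
            shape = [2 if i in subset else 1 for i in range(n_vars)]
            joint *= ratio.reshape(shape)
    return joint

# Example: three binary variables, one pairwise and one singleton component.
p01 = np.array([[0.3, 0.2], [0.1, 0.4]])  # hypothetical P(X0, X1)
p2 = np.array([0.6, 0.4])                 # hypothetical P(X2)
joint = ipf_binary({(0, 1): p01, (2,): p2}, n_vars=3)
```

Because the starting point is uniform, each multiplicative update moves the table only as far as the matched constraint requires, which is why the fixed point is the maximum-entropy distribution consistent with the supplied components.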
Computing and storing probabilities is a hard problem as soon as one has to deal with complex distri...
We give optimal algorithms for generating discrete random variables for changing distributions. We ...
We produce a positive approximation of a probability density in [0,1] when only a finite num...
We test the accuracy of various methods for approximating underspecified joint probability distribut...
Given two discrete random variables X and Y, with probability distributions p = (p1,..., pn) and q =...
The measurement and/or storage of high order probability distributions implies exponential increases...
Practical computational limits for stochastic decision analysis models often require that probabilit...
Given two probability distributions p = (p_1, p_2, ..., p_n) and q = (q_1, q_2, ..., q_m) of two dis...
In this paper, we propose new methods to approximate probability distributions that are incompletely...
Given two discrete random variables X and Y, with probability distributions p = (p(1), ..., p(n))...
The maximum entropy principle is a powerful tool for solving underdetermined inverse problems. This ...
The symmetric discrete probability distribution approximation by the maximum entropy approach, whose the...
The paper considers the role of entropy and other information theoretic concepts in the ...
Estimation of Distribution Algorithms (EDA) have been proposed as an extension of genetic algorithms...
A theorem that gives the minimum number of values of a discrete probability distribution, which must...