The approximation of a discrete probability distribution t by an M-type distribution p, i.e., a distribution whose probabilities are integer multiples of 1/M, is considered. The approximation error is measured by the informational divergence D(t∥p), which is an appropriate measure, e.g., in the context of data compression. Properties of the optimal approximation are derived, and asymptotically tight bounds on the approximation error are presented. A greedy algorithm is proposed that solves this M-type approximation problem optimally. Finally, it is shown that different instantiations of this algorithm minimize the informational divergence D(p∥t) or the variational distance ∥p − t∥_1.
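The abstract does not spell out the greedy algorithm, so the following is a minimal Python sketch of one plausible instantiation for the D(t∥p) objective; the function name mtype_approximation and its interface are assumptions, not the paper's API. It relies on the fact that, writing p_i = c_i/M, minimizing D(t∥p) is equivalent to maximizing the sum of t_i log c_i over integers c_i ≥ 1 with total count M, a separable concave objective for which assigning the M mass units one at a time to the largest marginal gain is optimal.

```python
import heapq
import math

def mtype_approximation(t, M):
    """Greedy M-type approximation of t, minimizing D(t || p).

    t : target probabilities (all t_i > 0, summing to 1)
    M : denominator; the returned p satisfies p_i = c_i / M

    Since D(t||p) = -H(t) + log M - sum_i t_i log c_i, minimizing it
    over integer counts c_i >= 1 with sum c_i = M is the same as
    maximizing sum_i t_i log c_i. That objective is separable and
    concave in each c_i, so unit-by-unit greedy assignment is optimal.
    """
    n = len(t)
    if M < n:
        raise ValueError("need M >= number of symbols with t_i > 0")
    c = [1] * n  # every symbol needs at least one unit, else D(t||p) = inf
    # max-heap (via negation) of marginal gains t_i * log((c_i + 1) / c_i)
    heap = [(-ti * math.log(2.0), i) for i, ti in enumerate(t)]
    heapq.heapify(heap)
    for _ in range(M - n):
        _, i = heapq.heappop(heap)   # symbol with the largest gain
        c[i] += 1
        heapq.heappush(heap, (-t[i] * math.log((c[i] + 1) / c[i]), i))
    return [ci / M for ci in c]
```

For example, mtype_approximation([0.5, 0.3, 0.2], 8) returns [0.5, 0.25, 0.25], i.e., the counts c = (4, 2, 2). The D(p∥t) and ∥p − t∥_1 instantiations mentioned in the abstract would presumably follow from swapping in the corresponding marginal-gain function, but that variant is not shown here.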