A nontrivial linear mixture of independent random variables of fixed entropies has minimum entropy only for Gaussian distributions. This "minimum entropy principle" was first stated for two variables by Shannon in 1948 in the form of the entropy-power inequality, which has long proven useful for deriving converse multiuser coding theorems. It was also applied to deconvolution problems by Donoho and generalized to linear transformations by Zamir and Feder, and more recently to Rényi entropies with different formulations by Bobkov and Chistyakov and by Ram and Sason. Available proofs involve either the integration over a path of Gaussian perturbation of Fisher information or minimum mean-squared error ...
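For reference, a standard statement of the inequality this entry builds on (the abstract itself is truncated, so this is the textbook form, not a quotation): for independent random vectors $X$, $Y$ in $\mathbb{R}^n$ with densities,

$$ N(X+Y) \;\ge\; N(X) + N(Y), \qquad N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}, $$

where $h$ is differential entropy and $N$ the entropy power, with equality if and only if $X$ and $Y$ are Gaussian with proportional covariances.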
The Maximum Entropy Principle (MEP) maximizes the entropy provided the effort ...
The principle of maximum entropy is a general method to assign values to probability distributions o...
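As a concrete illustration of the principle described in the two entries above (a minimal sketch, not code from either paper): on a finite alphabet, the maximum entropy distribution under a mean constraint is an exponential family $p_i \propto e^{-\lambda i}$, and the multiplier $\lambda$ can be found by bisection. The target mean 4.5 is Jaynes' classic loaded-die example.

```python
import numpy as np

# Maxent distribution on {1,...,6} with a prescribed mean: the solution is
# Gibbs-form, p_i ∝ exp(-lam * i); we solve for the multiplier lam by bisection.
values = np.arange(1, 7, dtype=float)

def mean_for(lam):
    """Mean of the Gibbs distribution p_i ∝ exp(-lam * i)."""
    w = np.exp(-lam * values)
    p = w / w.sum()
    return p @ values

def maxent_die(target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisection on lam; mean_for is strictly decreasing in lam."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean still too high -> need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * values)
    return w / w.sum()

p = maxent_die(4.5)  # Jaynes' example: a die whose average roll is 4.5
print("maxent pmf:", np.round(p, 4))
print("entropy:", -(p * np.log(p)).sum())
```

Among all distributions with that mean, this one has strictly larger entropy than any alternative; any other constraint set would simply change the sufficient statistic in the exponent.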
Shannon entropy of a probability distribution gives a weighted mean of a measure of information that...
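Concretely, the "weighted mean" reading in the entry above corresponds to the standard definition

$$ H(X) = -\sum_x p(x) \log p(x) = \mathbb{E}\big[-\log p(X)\big], $$

i.e. the expectation of the surprisal $-\log p(X)$.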
A framework for deriving Rényi entropy-power inequalities (EPIs) is presented ...
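For orientation (conventions for the entropy-power exponent and normalization differ among the Bobkov–Chistyakov and Ram–Sason formulations cited in the first entry), the Rényi entropy of order $\alpha$ of a density $f$ on $\mathbb{R}^n$ and the associated entropy power are typically taken as

$$ h_\alpha(X) = \frac{1}{1-\alpha} \log \int_{\mathbb{R}^n} f^\alpha(x)\,dx, \qquad N_\alpha(X) = e^{\frac{2}{n} h_\alpha(X)}, $$

with $h_\alpha \to h$ (the Shannon case) as $\alpha \to 1$.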
We prove the following generalization of the Entropy Power Inequality: $h(A\mathbf{x}) \geq h(A\tilde{\mathbf{x}})$, where $h(\cdot)$ denotes ...
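A fuller statement of this inequality, reconstructed from the standard form of the Zamir–Feder result (the abstract above is cut off): for an arbitrary matrix $A$ and a random vector $\mathbf{x}$ with independent components,

$$ h(A\mathbf{x}) \;\ge\; h(A\tilde{\mathbf{x}}), $$

where $\tilde{\mathbf{x}}$ is a Gaussian vector with independent components matched in entropy to those of $\mathbf{x}$, i.e. $h(\tilde{x}_i) = h(x_i)$ for every $i$.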
This chapter focuses on the notions of entropy and of maximum entropy distributions ...
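A central example of a maximum entropy distribution in this context (a standard fact, stated here for concreteness): among all $\mathbb{R}^n$-valued random vectors with covariance matrix $K$, the Gaussian maximizes differential entropy,

$$ h(X) \;\le\; \frac{1}{2} \log\!\big( (2\pi e)^n \det K \big), $$

with equality if and only if $X \sim \mathcal{N}(\mu, K)$.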
Optimal transport is a powerful tool for proving entropy-entropy production inequalities related to ...
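A representative inequality of the entropy-entropy production type that such transport arguments target (the Gaussian logarithmic Sobolev inequality, stated for context; the specific inequalities treated in the entry above are not recoverable from the truncated abstract): for the standard Gaussian measure $\gamma$ on $\mathbb{R}^n$,

$$ H(\mu \,\|\, \gamma) \;\le\; \frac{1}{2}\, I(\mu \,\|\, \gamma), $$

where $H$ is relative entropy and $I$ the relative Fisher information $\int \lVert \nabla \log \tfrac{d\mu}{d\gamma} \rVert^2 \, d\mu$.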
Within the framework of linear vector Gaussian channels with arbitrary signaling, the Jacobian of th...
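The scalar identity underlying this line of work is the I-MMSE relation of Guo, Shamai and Verdú (the matrix/Jacobian generalization referred to above is cut off in the abstract): for $Y = \sqrt{\mathrm{snr}}\, X + N$ with $N \sim \mathcal{N}(0,1)$ independent of $X$,

$$ \frac{d}{d\,\mathrm{snr}}\, I\big(X; \sqrt{\mathrm{snr}}\, X + N\big) \;=\; \frac{1}{2}\, \mathrm{mmse}(\mathrm{snr}), \qquad \mathrm{mmse}(\mathrm{snr}) = \mathbb{E}\big[(X - \mathbb{E}[X \mid Y])^2\big]. $$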
A long-standing open problem in quantum information theory is to find the classical capacity of an o...
How low can the joint entropy of $n$ $d$-wise independent (for $d\geq 2$) discrete random variables ...
As discovered by Brenier, mapping through a convex gradient gives the optimal transport in $\mathbb{R}^n$. In th...
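Brenier's theorem, to which this entry refers, can be stated as follows (standard form): if $\mu$ is absolutely continuous and $\mu$, $\nu$ have finite second moments, there exists a convex $\varphi$ such that $T = \nabla\varphi$ pushes $\mu$ forward to $\nu$, and $T$ is the unique minimizer of the quadratic transport cost:

$$ T = \nabla\varphi, \qquad T_{\#}\mu = \nu, \qquad \int_{\mathbb{R}^n} \lVert x - T(x) \rVert^2 \, d\mu(x) \;=\; \min_{S:\, S_{\#}\mu = \nu} \int_{\mathbb{R}^n} \lVert x - S(x) \rVert^2 \, d\mu(x). $$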
The maximum entropy principle provides one of the bases for specification of complete models from partial ...
We describe some analogy between optimal transport and the Schrödinger problem ...
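One common way to make the analogy precise (stated here in the standard entropic-minimization form; the details of the entry above are truncated): the Schrödinger problem seeks the coupling of the marginals closest in relative entropy to a reference measure $R$, e.g. the joint law of a Brownian motion at two times,

$$ \inf_{\pi \in \Pi(\mu,\nu)} H(\pi \,\|\, R), $$

where $\Pi(\mu,\nu)$ is the set of couplings of $\mu$ and $\nu$; as the noise in $R$ vanishes, minimizers converge to the optimal transport plan.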
Recently, maximum entropy calculations have become associated with solutions of the single particle ...