The f-entropy family of information measures, u(p1, …, pm) = Σk f(pk) with f concave (e.g., Shannon (1948) Bell Syst. Tech. J. 27, 379–423, 623–656; quadratic; Daróczy (1970) Inform. Contr. 16, 36–51; etc.), is considered. A characterization of the tightest upper and lower bounds on f-entropies in terms of the probability of error is presented. These bounds are used to derive the dual bounds, i.e., the tightest lower and upper bounds on the probability of error in terms of f-entropies. Concerning the use of f-entropies as a tool for feature selection, it is proved that no member of this family induces, over an arbitrary set of features, the same preference order as the probability-of-error rule.
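The family above is concrete enough to compute directly. As an illustration only (not code from the paper), here is a minimal Python sketch of u(p1, …, pm) = Σk f(pk) for three concave choices of f named in the abstract: Shannon's, the quadratic, and Daróczy's entropy of degree alpha. All function names are mine.

```python
import numpy as np

def f_entropy(p, f):
    """u(p1, ..., pm) = sum_k f(p_k) for a probability vector p and concave f."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(f(p)))

def shannon_f(p):
    # f(t) = -t log2 t, with f(0) = 0 by continuity; u is then the Shannon entropy in bits.
    safe = np.where(p > 0, p, 1.0)
    return -p * np.log2(safe)

def quadratic_f(p):
    # f(t) = t (1 - t); u is the quadratic (Gini-type) entropy 1 - sum_k p_k^2.
    return p * (1.0 - p)

def daroczy_f(p, alpha=2.0):
    # f(t) = (t**alpha - t) / (2**(1 - alpha) - 1), alpha != 1; summing over k
    # gives Daróczy's entropy of degree alpha, since sum_k p_k = 1.
    return (p ** alpha - p) / (2.0 ** (1.0 - alpha) - 1.0)

p = [0.6, 0.3, 0.1]
print(f_entropy(p, shannon_f))                    # ≈ 1.295 bits
print(f_entropy(p, quadratic_f))                  # = 0.54
print(f_entropy(p, lambda t: daroczy_f(t, 2.0)))  # = 1.08 (twice the quadratic value when alpha = 2)
```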
The demands for machine learning and knowledge extraction methods have been booming due to the unpre...
In many practical situations, we have only partial information about the probabilities. In some case...
Fano's inequality is a sharp upper bound on conditional entropy in terms of the probability of error....
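For reference, the bound this abstract refers to can be stated and checked numerically: for an m-ary X guessed from Y with error probability Pe, Fano's inequality gives H(X|Y) ≤ h(Pe) + Pe·log2(m − 1), where h is the binary entropy. The sketch below (toy numbers and names of my own choosing) evaluates both sides for a small joint distribution.

```python
import numpy as np

def binary_entropy(x):
    # h(x) = -x log2 x - (1 - x) log2 (1 - x), with h(0) = h(1) = 0
    return float(-sum(t * np.log2(t) for t in (x, 1.0 - x) if t > 0))

def fano_upper_bound(p_error, m):
    # Fano bound on H(X|Y) for an m-ary X: h(Pe) + Pe * log2(m - 1)
    return binary_entropy(p_error) + p_error * np.log2(m - 1)

# Toy joint distribution p(x, y) over a 3-ary X and a binary Y (values illustrative only).
p_xy = np.array([[0.30, 0.05],
                 [0.05, 0.30],
                 [0.10, 0.20]])
p_y = p_xy.sum(axis=0)
p_x_given_y = p_xy / p_y                  # columns are p(x | y)

# Conditional entropy H(X|Y) and Bayes error of the MAP decision rule.
h_x_given_y = float(-np.sum(p_xy * np.log2(p_x_given_y)))
p_error = 1.0 - float(np.sum(p_y * p_x_given_y.max(axis=0)))

print(h_x_given_y, "<=", fano_upper_bound(p_error, m=3))   # ≈ 1.28 <= 1.37
```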
Fano’s inequality has proven to be one important result in Shannon’s information theory having found...
We consider the maximum entropy problems associated with Rényi $Q$-entropy, su...
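The abstract is truncated here, but the Rényi $Q$-entropy it invokes is the standard order-q quantity H_q(P) = (1/(1 − q)) log Σ_k p_k^q. The small sketch below (names mine) only pins down that definition; it does not reproduce the maximum-entropy problems themselves.

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy of order q (q > 0), in bits:
    H_q(P) = log2(sum_k p_k ** q) / (1 - q); as q -> 1 it tends to the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log2(p)))   # Shannon limit
    return float(np.log2(np.sum(p ** q)) / (1.0 - q))

p = [0.5, 0.25, 0.25]
for q in (0.5, 1.0, 2.0):
    print(q, renyi_entropy(p, q))   # decreasing in q: ≈ 1.543, 1.5, 1.415
```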
Fano’s inequality relates the error probability and conditional entropy of a finitely-value...
This paper is part of a general study of efficient information selection, storage and processing. It...
We show how to determine the maximum and minimum possible values of one measure of entropy for a giv...
We revisit the problem of estimating entropy of discrete distributions from independent samples, stu...
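The estimators analyzed in that line of work are not reproduced here; the sketch below only shows the naive plug-in (empirical-frequency) estimate that such studies typically compare against, on made-up sample data.

```python
import numpy as np

def plugin_entropy_estimate(samples, base=2.0):
    # Plug-in (maximum-likelihood) entropy estimate from i.i.d. samples:
    # substitute empirical frequencies for the unknown probabilities.
    _, counts = np.unique(samples, return_counts=True)
    freqs = counts / counts.sum()
    return float(-np.sum(freqs * np.log(freqs)) / np.log(base))

rng = np.random.default_rng(0)
samples = rng.choice(4, size=1000, p=[0.4, 0.3, 0.2, 0.1])
print(plugin_entropy_estimate(samples))   # compare with the true value, ≈ 1.846 bits
```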
Entropy and conditional mutual information are the key quantities information theory provides to mea...
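Since that abstract is cut off, a concrete reminder of the two quantities it names may help. The following sketch (toy joint distribution and names of my own choosing) computes H(X) and I(X; Y | Z) directly from their definitions.

```python
import numpy as np

def entropy(p):
    # H(X) = -sum_x p(x) log2 p(x)
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def conditional_mutual_information(p_xyz):
    # I(X; Y | Z) = sum_{x,y,z} p(x,y,z) log2[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
    p_xz = p_xyz.sum(axis=1, keepdims=True)
    p_yz = p_xyz.sum(axis=0, keepdims=True)
    p_z = p_xyz.sum(axis=(0, 1), keepdims=True)
    mask = p_xyz > 0
    ratio = (p_xyz * p_z) / (p_xz * p_yz)
    return float(np.sum(p_xyz[mask] * np.log2(ratio[mask])))

# Toy joint distribution p(x, y, z) over three binary variables (values illustrative only).
p_xyz = np.array([[[0.10, 0.05], [0.05, 0.10]],
                  [[0.15, 0.10], [0.10, 0.35]]])
print(entropy(p_xyz.sum(axis=(1, 2))))         # H(X)
print(conditional_mutual_information(p_xyz))   # I(X; Y | Z) >= 0
```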
This chapter focuses on the notions of entropy and of maximum entropy distribu...
Many algorithms of machine learning use an entropy measure as optimization criterion. Among the wide...
PAE cannot be made a basis for either a generalized statistical mechanics or a generalized informati...