Probability density functions (pdf's) of high dimensionality are impractical to estimate from real data. For accurate estimation, the dimensionality of the pdf can be at most 5-10. To reduce the dimensionality, a sufficient statistic is usually employed; when none is available, there is no universal agreement on how to proceed. We show how to construct a high-dimensional pdf, based on the pdf of a low-dimensional statistic, that is closest to the true one in the sense of divergence. The latter criterion asymptotically minimizes the probability of error in a decision rule. An application to feature selection for classification is described. © 2006 IEEE
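The feature-selection idea in the abstract can be illustrated with a minimal sketch: given samples from two classes, score each candidate low-dimensional statistic (here, a single feature) by the divergence between its class-conditional densities, and keep the one with the largest divergence. The function names and the histogram-based density estimates below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions given as histograms."""
    p = p + eps
    q = q + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_feature_by_divergence(X0, X1, bins=20):
    """Pick the feature whose class-conditional histograms are farthest apart
    in symmetrized KL divergence.  X0, X1: class samples, shape (n, d)."""
    scores = []
    for j in range(X0.shape[1]):
        lo = min(X0[:, j].min(), X1[:, j].min())
        hi = max(X0[:, j].max(), X1[:, j].max())
        h0, _ = np.histogram(X0[:, j], bins=bins, range=(lo, hi))
        h1, _ = np.histogram(X1[:, j], bins=bins, range=(lo, hi))
        scores.append(kl_divergence(h0.astype(float), h1.astype(float))
                      + kl_divergence(h1.astype(float), h0.astype(float)))
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
# Synthetic data: feature 0 is uninformative, feature 1 separates the classes.
X0 = rng.normal([0.0, 0.0], 1.0, size=(2000, 2))
X1 = rng.normal([0.0, 3.0], 1.0, size=(2000, 2))
best, scores = select_feature_by_divergence(X0, X1)
print(best)  # feature 1 should win by a wide divergence margin
```

A larger divergence between the class-conditional pdfs of the chosen statistic corresponds, asymptotically, to a lower probability of error for the induced decision rule, which is the sense in which the criterion is optimal.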
Abstract—This paper presents an efficient approach to calculate the difference between two probabil...
Methods for directly estimating the ratio of two probability density functions without going through...
We consider, in the modern setting of high-dimensional statistics, the classic problem of optimizing...
This paper addresses the problem of calculating the multidimensional probability density functions (...
The ratio of two probability density functions is becoming a quantity of interest these days in the ...
This thesis documents three different contributions in statistical learning theory. They were develo...
Abstract—In this paper, we present the theoretical foundation for optimal classification using class...
Abstract—We consider in this paper a set of k p-variates from which k unbiased estimates of a multidim...
Many existing engineering works model the statistical characteristics of the entities under study as...
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fi...
The approximation of a discrete probability distribution t by an M-type distribution p i...
In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target...
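The efficiency question for multidimensional Metropolis algorithms can be sketched with the well-known random-walk optimal-scaling heuristic (proposal scale of order 2.38/√d, acceptance rate near 0.234 in high dimension). This is a generic illustration of that classical result, not code from the abstract's paper; the target is a standard d-dimensional normal chosen for simplicity.

```python
import numpy as np

def rw_metropolis(d, scale, n_steps=20000, seed=0):
    """Random-walk Metropolis targeting a d-dimensional standard normal.
    Returns the empirical acceptance rate of the chain."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    logp = -0.5 * x @ x          # log-density up to a constant
    accepts = 0
    for _ in range(n_steps):
        prop = x + scale * rng.standard_normal(d)
        logp_prop = -0.5 * prop @ prop
        # Metropolis accept/reject step
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepts += 1
    return accepts / n_steps

# Scaling proposals like 2.38/sqrt(d) drives the acceptance rate
# toward roughly 0.234 as the dimension d grows.
d = 30
rate = rw_metropolis(d, 2.38 / np.sqrt(d))
print(rate)
```

Too large a proposal scale collapses the acceptance rate toward zero, while too small a scale accepts nearly everything but moves slowly; the 2.38/√d rule balances the two.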
We propose to approximate the conditional density function of a random variable Y given a dependent ...