The connection between compression and the estimation of probability distributions has long been known for the case of discrete alphabet sources and lossless coding. A universal lossless code which does a good job of compressing must implicitly also do a good job of modeling. In particular, with a collection of codebooks, one for each possible class or model, if codewords are chosen from among the ensemble of codebooks so as to minimize bit rate, then the codebook selected provides an implicit estimate of the underlying class. Less is known about the corresponding connections between lossy compression and continuous sources. Here we consider aspects of estimating conditional and unconditional densities in conjunction with Bayes-risk weighte...
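As a concrete illustration of the minimum-bit-rate rule described above, the following sketch treats two per-class symbol distributions as lossless codebooks. The class models and data here are assumptions made for illustration, not the scheme studied in the paper: the ideal codeword length of a sequence under class c is -log2 Pc(sequence), so selecting the codebook that compresses the data best is the same as selecting the most likely class.

```python
import numpy as np

# Toy illustration (class models and data are assumed): two per-class symbol
# distributions play the role of two lossless codebooks.  The ideal (Shannon)
# codeword length of a sequence under class c is -log2 P_c(sequence), so the
# codebook that minimizes bit rate is also the maximum-likelihood class.

class_models = {
    "class_A": np.array([0.7, 0.2, 0.1]),   # P(symbol | class A)
    "class_B": np.array([0.2, 0.3, 0.5]),   # P(symbol | class B)
}

def code_length_bits(sequence, symbol_probs):
    """Ideal codeword length of the sequence under one codebook, in bits."""
    return float(-np.sum(np.log2(symbol_probs[sequence])))

def classify_by_compression(sequence):
    """Return the class whose codebook compresses the sequence best."""
    lengths = {c: code_length_bits(sequence, p) for c, p in class_models.items()}
    return min(lengths, key=lengths.get), lengths

sequence = np.array([0, 0, 1, 0, 2, 0, 0, 1])   # drawn mostly like class A
label, lengths = classify_by_compression(sequence)
print(label, lengths)
```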
Due to the rapidly increasing need for methods of data compression, quantization has become a flouri...
Many important data analysis tasks can be addressed by formulating them as probability estimation pr...
This study is divided into two parts. The first part involves an investigation of near-lossless comp...
We characterize the best achievable performance of lossy compression algorithms operating on arbitra...
The development of a universal lossy data compression model based on a lossy version of the Kraft in...
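For reference, the classical (lossless) Kraft inequality bounds the codeword lengths of any uniquely decodable binary code, and the lossy analogues studied in this line of work are typically stated by replacing codewords for individual source strings with distortion balls around them. The second display below is only a hedged sketch of the general form such statements take, not the specific inequality developed in that paper.

```latex
% Classical Kraft inequality for a uniquely decodable binary code with
% codeword lengths \ell(x), followed by a sketch (an assumption about the
% general form, not the paper's own statement) of a lossy analogue in which
% the length function \ell_n is bounded below via the measure Q_n of the
% distortion ball B(x^n, D) around the source string.
\[
  \sum_{x} 2^{-\ell(x)} \le 1,
  \qquad
  \ell_n(x^n) \ \ge\ -\log_2 Q_n\bigl(B(x^n, D)\bigr),
  \quad
  B(x^n, D) = \{\, y^n : d_n(x^n, y^n) \le D \,\}.
\]
```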
We are interested in the vector quantization problem. Much research focuses on finding a ...
A data compression system using vector quantization utilises a codebook or index tables constructed ...
Classification and compression play important roles in communicating digital information. Their comb...
In a memoryless vector quantization system, each image block is independently encoded as...
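The block-by-block encoding described in this abstract (and in the vector quantization abstracts above) can be sketched as follows. The codebook, block size, and test image are placeholders chosen for illustration, not details taken from the paper: each non-overlapping block is encoded independently as the index of its nearest codevector, and the decoder reconstructs by table lookup.

```python
import numpy as np

# Minimal sketch of memoryless vector quantization of an image.  All names,
# sizes, and the random codebook below are assumptions for illustration.

rng = np.random.default_rng(0)
block = 4                                                   # 4x4 blocks, 16-dim vectors
codebook = rng.uniform(0, 255, size=(64, block * block))    # 64 codevectors

def encode(image):
    """Return one codebook index per non-overlapping block."""
    h, w = image.shape
    indices = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            v = image[i:i + block, j:j + block].reshape(-1)
            # nearest codevector under squared-error distortion
            indices.append(int(np.argmin(np.sum((codebook - v) ** 2, axis=1))))
    return indices

def decode(indices, shape):
    """Rebuild the image from the transmitted indices by table lookup."""
    h, w = shape
    image = np.zeros(shape)
    it = iter(indices)
    for i in range(0, h, block):
        for j in range(0, w, block):
            image[i:i + block, j:j + block] = codebook[next(it)].reshape(block, block)
    return image

image = rng.uniform(0, 255, size=(16, 16))
reconstruction = decode(encode(image), image.shape)
print("per-pixel MSE:", float(np.mean((image - reconstruction) ** 2)))
```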
This work demonstrates a formal connection between density estimation with a data-rate constraint an...
We study topics in source coding, and vector quantization (VQ) in particular. We approach VQ from tw...
We consider the problem of joint universal variable-rate lossy coding and identification for paramet...
Many regression schemes deliver a point estimate only, but often it is useful or even essential to q...