In this paper, we investigate the problem of classifying feature vectors with mutually independent but non-identically distributed elements that take values from a finite alphabet set. First, we show the importance of this problem. Next, we propose a classifier and derive an analytical upper bound on its error probability. We show that the error probability tends to zero as the length of the feature vectors grows, even when only one training feature vector per label is available. Thereby, we show that at least one asymptotically optimal classifier exists for this important problem. Finally, we provide numerical examples showing that the proposed classifier outperforms conventional classification algorithms whe...
Many classification problems may be difficult to formulate in the traditional supervised setting, w...
This thesis focuses on the binary classification problem with training data under both classes. We f...
We address the problem of general supervised learning when data can only be accessed through an (in...
In this paper the possibilities are discussed for training statistical pattern recognizers b...
In many real-world classification problems, the labels of training examples are randomly corrupted. ...
We study the effect of imperfect training data labels on the performance of classification methods. ...
This thesis concerns the development and mathematical analysis of statistical procedures for classi...
One of the advantages of supervised learning is that the final error metric is available during trai...
This paper considers the problem of learning optimal discriminant functions for pattern classificati...
Repurposing tools and intuitions from Shannon theory, we present fundamental limits on the reli...
We introduce a new model addressing feature selection from a large dictionary of variables that can ...
We study high-dimensional asymptotic performance limits of binary supervised classification...
This paper investigates a new approach for training discriminant classifiers when only a small set o...
When choosing a classification rule, it is important to take into account the amount of sample data ...