In this thesis, we develop methods for constructing an A-weighted metric (x - y)' A (x - y) that improves the performance of K-nearest neighbor (KNN) classifiers. KNN is known to be highly flexible, but it can be inefficient and unstable. By incorporating a parametrically optimized metric into KNN, global dimension reduction is carried out efficiently, leaving the most difficult nonlinear features of the problem to be solved in a low-dimensional projected feature space. Optimization over A is done by formulating a probability model that captures KNN's essential property: using only a local neighborhood of training cases to predict the class of a test case. The expected correct vote margin can be calculated under the probability model...
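To make the A-weighted metric concrete, here is a minimal sketch of a KNN classifier that plugs in a learned positive semidefinite matrix A. This is an illustration only, not the thesis's actual implementation; the names X_train, y_train (NumPy arrays), A, and k are hypothetical placeholders, and how A is learned is exactly what the thesis addresses.

    import numpy as np

    def a_weighted_distance(x, y, A):
        # Squared A-weighted distance (x - y)' A (x - y).
        d = x - y
        return float(d @ A @ d)

    def knn_predict(x, X_train, y_train, A, k=5):
        # Rank training cases by the A-weighted metric and take
        # a majority vote among the k nearest neighbors.
        dists = np.array([a_weighted_distance(x, xi, A) for xi in X_train])
        nearest = np.argsort(dists)[:k]
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        return labels[np.argmax(counts)]

Setting A = np.eye(p) recovers ordinary Euclidean KNN, while a low-rank A = L'L makes the same distance equal the Euclidean distance between Lx and Ly, which is the global dimension reduction to a projected feature space described above.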
We consider the problem of learning a local metric to enhance the performance of nearest neighbor cl...
Abstract. The nearest neighbor classification/regression technique, besides its simplicity, is one ...
The standard kNN algorithm suffers from two major drawbacks: sensitivity to the parameter value k, i...
Abstract. The Nearest Neighbor (NN) classification/regression techniques, besides their simplicity,...
The Nearest Neighbor (NN) classification/regression techniques, besides their simplicity, are amongst...
The Nearest Neighbor (NN) classification/regression techniques, besides their simplicity, are one of...
This thesis is related to distance metric learning for kNN classification. We use the k nearest neig...
Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) cla...
We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by se...
We formulate the problem of metric learning for k nearest neighbor classification as a large margin ...
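The truncated abstract above does not show the full formulation, so the following is a sketch of the standard large-margin (LMNN-style) objective for learning A, not a verbatim reproduction of these authors' equations:

\min_{A \succeq 0} \; \sum_{j \rightsquigarrow i} d_A(x_i, x_j) \;+\; c \sum_{j \rightsquigarrow i} \sum_{l} (1 - y_{il}) \left[ 1 + d_A(x_i, x_j) - d_A(x_i, x_l) \right]_+

where d_A(x, y) = (x - y)^\top A (x - y) as in the thesis abstract, j \rightsquigarrow i marks x_j as a target neighbor of x_i, y_{il} = 1 when x_i and x_l share a label and 0 otherwise, and [z]_+ = \max(z, 0) is the hinge loss. The first term pulls target neighbors close; the second pushes differently labeled points at least a unit margin farther away.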
A Nearest Neighbor (NN) classifier assumes class conditional probabilities to be locally smooth. Thi...
Nearest neighbor classification assumes locally constant class conditional probabilities. This assum...
We consider improving the performance of k-Nearest Neighbor classifiers. A regularized kNN is propo...
Abstract—Much research has been devoted to learning a Mahalanobis distance metric, which can effecti...