Nearest subspace methods (NSM) are a family of classification methods widely used for high-dimensional data. In this paper, we propose to improve the classification performance of NSM by learning tailored distance metrics from samples to class subspaces. The learned metric is termed the ‘learned distance to subspace’ (LD2S). Using LD2S in the classification rule of NSM moves samples closer to their correct class subspaces and farther from the wrong class subspaces. In this way, the classification task becomes easier and the classification performance of NSM improves. The superior classification performance of using LD2S for NSM is demonstrated on three real-world high-dimensional spectral...
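For context, the baseline NSM rule assigns a sample to the class whose subspace is nearest under a sample-to-subspace distance. The sketch below uses the conventional orthogonal (residual) distance, not the paper's learned LD2S metric; the function names and the `dim` parameter are illustrative assumptions, with each class subspace estimated from its training samples via truncated SVD.

```python
import numpy as np

def fit_class_subspaces(X, y, dim):
    """For each class, compute an orthonormal basis (d x dim) of the
    subspace spanned by its training samples via truncated SVD."""
    bases = {}
    for c in np.unique(y):
        Xc = X[y == c].T                      # features x samples
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        bases[c] = U[:, :dim]                 # leading left singular vectors
    return bases

def nsm_classify(x, bases):
    """Assign x to the class whose subspace gives the smallest
    orthogonal distance ||x - U U^T x||."""
    best_c, best_d = None, np.inf
    for c, U in bases.items():
        residual = x - U @ (U.T @ x)          # component outside the subspace
        d = np.linalg.norm(residual)
        if d < best_d:
            best_c, best_d = c, d
    return best_c
```

LD2S, as described in the abstract, replaces this fixed orthogonal distance with a metric learned so that samples fall closer to their correct class subspace than to the wrong ones.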
National Natural Science Foundation of China [61174161]; Specialized Research Fund for the Doctoral ...