Feature selection is widely used in machine learning and data mining because it alleviates the so-called curse of dimensionality in high-dimensional data. However, previous work has designed feature selection methods under the assumption that all of the information in a data set can be observed. In this paper, we propose unsupervised and supervised feature selection methods for incomplete data, introducing an L2,1 norm and a reconstruction error minimization method. Specifically, the proposed feature selection objective functions exploit an indicator matrix that reflects the unobserved information in incomplete data sets, and we present pairwise constraints, minimizing the L2,1-norm-r...
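The abstract above is cut off before the objective function is written out, so the NumPy sketch below is only a hypothetical illustration of the kind of objective it describes: a reconstruction error restricted to the observed entries through an indicator matrix, plus an L2,1-norm penalty whose row sparsity selects features. The function name, the factorization X W W^T, and the parameter lam are assumptions for illustration, not the authors' exact formulation.

import numpy as np

# Illustrative sketch only -- not the paper's exact objective.
# It assumes the common pattern of a masked reconstruction error
# plus an L2,1 penalty on a feature-weight matrix W.
# X   : n x d data matrix with missing entries filled by zeros
# M   : n x d indicator matrix (1 = observed entry, 0 = unobserved entry)
# W   : d x k projection / feature-weight matrix (rows score the d features)
# lam : regularization weight (hypothetical name)
def masked_l21_objective(X, M, W, lam):
    # Reconstruction error evaluated only on the observed entries via M
    residual = M * (X - X @ W @ W.T)
    recon_error = np.sum(residual ** 2)
    # L2,1 norm of W: sum of the Euclidean norms of its rows,
    # which drives whole rows (features) toward zero
    l21_norm = np.sum(np.linalg.norm(W, axis=1))
    return recon_error + lam * l21_norm

After minimizing an objective of this form, features would typically be ranked by the row norms of the learned W, e.g. scores = np.linalg.norm(W, axis=1), with the top-ranked rows kept as the selected features.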
In supervised learning scenarios, feature selection has been studied widely in the literature. Selec...
Spectral feature selection identifies relevant features by measuring their capability of preserving ...
Identifying a small number of features that can represent the data is a known problem that comes up ...
Feature selection is an important component of many machine learning applications. Especially in ma...
22nd International Conference on Pattern Recognition, ICPR 2014, Sweden, 24-28 August 2014. This paper...
Feature selection is an important preprocessing task for many machine learning and pattern recogniti...
Feature selection is an important preprocessing step in mining high-dimensional data. Generally, sup...
Abstract: This paper presents a classification system in which learning, feature selection, and classic...
Feature selection plays an important role in many machine learning and data mining applications. In ...
In real-world machine learning problems, it is very common that part of the input feature vector is ...
The problem of feature selection is critical in several areas of machine learning and data analysis ...
Compared with supervised learning for feature selection, it is much more difficult to select the dis...
Abstract—We address the incomplete-data problem in which feature vectors to be classified are missin...
There are a lot of redundant and irrelevant features in high-dimensional data, which seriously affect...