This paper introduces a new technique for feature selection and illustrates it on a real data set. Namely, the proposed approach creates subsets of attributes based on two criteria: (1) individual attributes have high discrimination (classification) power; and (2) the attributes in the subset are complementary; that is, they misclassify different classes. The method uses information from a confusion matrix and evaluates one attribute at a time.

Keywords: classification, attribute selection, confusion matrix, k-nearest neighbors

Background

In classification problems, good classification accuracy is the primary concern; however, identifying the attributes (or features) with the largest separation power is also of interest...
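A minimal sketch of the two criteria, assuming Python with NumPy; the toy data, the 0.5 accuracy threshold, and the function names (`knn_predict`, `select_attributes`) are illustrative, not the paper's actual implementation:

```python
import numpy as np
from collections import Counter

def confusion_matrix(y_true, y_pred, n_classes):
    """Count (true, predicted) pairs; correct predictions fall on the diagonal."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def knn_predict(col_train, y_train, col_test, k=3):
    """k-nearest-neighbor prediction using a single attribute (one column)."""
    preds = []
    for x in col_test:
        idx = np.argsort(np.abs(col_train - x), kind="stable")[:k]
        preds.append(Counter(y_train[idx]).most_common(1)[0][0])
    return np.array(preds)

def select_attributes(X_tr, y_tr, X_te, y_te, n_classes, k=3, min_acc=0.5):
    """Criterion 1: keep attributes whose individual accuracy is high.
    Criterion 2: greedily add attributes whose per-class recalls cover
    classes the current subset still misclassifies (complementary errors)."""
    recalls, accs = [], []
    for j in range(X_tr.shape[1]):  # evaluate one attribute at a time
        cm = confusion_matrix(y_te, knn_predict(X_tr[:, j], y_tr, X_te[:, j], k), n_classes)
        row = cm.sum(axis=1)
        recalls.append(np.where(row > 0, cm.diagonal() / np.maximum(row, 1), 0.0))
        accs.append(cm.trace() / cm.sum())
    candidates = [j for j in range(len(accs)) if accs[j] >= min_acc]  # criterion 1
    if not candidates:
        return []
    subset = [max(candidates, key=lambda j: accs[j])]  # best single attribute
    covered = recalls[subset[0]].copy()
    for j in sorted(candidates, key=lambda j: -accs[j]):  # criterion 2
        if j not in subset and (np.maximum(covered, recalls[j]) - covered).sum() > 0:
            subset.append(j)
            covered = np.maximum(covered, recalls[j])
    return subset

# Toy data: attribute 0 isolates class 0, attribute 1 isolates class 2,
# and attribute 2 is a constant (useless) column.
X = np.array([[0.0, 0.0, 5.0], [1.0, 1.0, 5.0], [2.0, 0.5, 5.0],
              [10.0, 0.2, 5.0], [11.0, 1.2, 5.0], [12.0, 0.7, 5.0],
              [10.5, 10.0, 5.0], [11.5, 11.0, 5.0], [12.5, 10.5, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
selected = select_attributes(X, y, X, y, n_classes=3)
print(selected)  # the two complementary attributes survive; the constant one does not
```

Here each attribute alone misclassifies different classes (attribute 0 confuses classes 1 and 2, attribute 1 confuses classes 0 and 1), so the greedy step keeps both because their per-class error patterns are complementary.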
The goal of feature selection is to find the optimal subset consisting of m features chosen from the...
When applying classifiers in real applications, the data imbalance often occurs when the number of e...
Feature weighting is known empirically to improve classification accuracy for k-nearest neighbor cla...
Abstract: Attribute reduction is one of the key processes for knowledge acquisition. Some data s...
Learning Classifier Systems (LCS) have not been widely applied to image recognition tasks due to the...
Dimensionality reduction of the problem space through detection and removal of variables, contributi...
Classification of data across different domains has been extensively researched and is one of the b...
The correctly classified data is reflected along the diagonal regions. The misclassified is reflecte...
Rough set theories are utilized in class-specific feature selection to improve the classification pe...
When a classification algorithm does not work on a data set, it is a non-trivial problem to figure o...
The aim of this paper is to discuss various feature selection algorithms applied to different ...
Abstract. The attribute selection techniques for supervised learning, used in the preprocessing phas...
Data mining is the process of analyzing data from different perspectives and summarizing it into use...
Accuracy of a classifier or predictor is normally estimated with the help of confusion matrix, which...
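As a small illustration of that point, overall accuracy can be read directly off a confusion matrix: correct predictions lie on the diagonal, so accuracy is the trace divided by the total count (the matrix values below are made up for the example):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[50,  3,  2],
               [ 4, 45,  6],
               [ 1,  5, 44]])

# Diagonal entries are the correctly classified counts.
accuracy = cm.trace() / cm.sum()
print(round(accuracy, 3))
```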