Abstract. We investigate the problem of supervised feature selection within the filtering framework. In our approach, applicable to two-class problems, a feature's strength is inversely related to the p-value of the null hypothesis that its class-conditional densities, p(X | Y = 0) and p(X | Y = 1), are identical. To estimate the p-values, we use Fisher's permutation test combined with four simple filtering criteria in the role of test statistics: sample mean difference, symmetric Kullback-Leibler distance, information gain, and the chi-square statistic. The experimental results of our study, obtained with the naive Bayes classifier and support vector machines, strongly indicate that the permutation test improves the above-mentioned...
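A minimal sketch (not the authors' code) of the permutation-test idea described in the abstract above: a feature's strength is tied to the p-value of the null hypothesis that p(X | Y = 0) and p(X | Y = 1) coincide. The sample mean difference is used as the test statistic here; the other criteria named in the abstract (symmetric Kullback-Leibler distance, information gain, chi-square) would be swapped in as `statistic`. Function names and the permutation count are illustrative assumptions, not taken from the paper.

import numpy as np

def mean_difference(x, y):
    """Absolute difference between the class-conditional sample means of one feature."""
    return abs(x[y == 0].mean() - x[y == 1].mean())

def permutation_p_value(x, y, statistic=mean_difference, n_permutations=1000, seed=0):
    """Estimate the p-value of the observed statistic under random label permutation."""
    rng = np.random.default_rng(seed)
    observed = statistic(x, y)
    count = 0
    for _ in range(n_permutations):
        y_perm = rng.permutation(y)           # break any feature-label association
        if statistic(x, y_perm) >= observed:  # permuted statistic at least as extreme
            count += 1
    # Add-one correction keeps the estimated p-value strictly positive.
    return (count + 1) / (n_permutations + 1)

# Feature strength is inversely related to the p-value: rank features by
# ascending p-value and keep the top-ranked ones for the classifier.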
Algorithms for feature selection fall into two broad categories: wrappers use the learning algorithm...
In the literature there are several studies on the performance of Bayesian network structure learning al...
In typical machine learning frameworks, model selection is of fundamental importance: commonly, mul...
The estimation of mutual information for feature selection is often subject to inaccuracies due to n...
Abstract. We explore the framework of permutation-based p-values for assessing the performance of cla...
This work presents a content-based recommender system for machine learning classifier algorithms. Gi...
We introduce and explore an approach to estimating statistical significance of classification accurac...
With the recent advancement of data collection techniques, there has been an explosive growth in the...
The problem of matching two sets of features appears in various tasks of computer vision and can be ...
Most techniques for attribute selection in decision trees are biased towards attributes with many va...
Many metaheuristic approaches are inherently stochastic. In order to compare such methods, statistic...
This thesis addresses the problem of feature selection in pattern recognition. A detailed analysis a...
Feature subset selection is an essential pre-processing task in machine learning and pattern recogni...