In many important real-world applications, the initial representation of the data is inconvenient, or even prohibitive, for further analysis. For example, high-dimensional, massive, structured, incomplete, and noisy data sets are common in image analysis, text analysis, and computational genetics. Feature extraction, i.e., the revelation of informative features from the raw data, is therefore one of the fundamental machine learning problems. Efficient feature extraction helps us understand the data and the process that generated it, and reduces the cost of future measurements and data analysis. Representing structured data as a compact set of informative numeric features allows well-studied machine learning techniques to be applied instead of developing new...
We introduce a framework for feature selection based on dependence maximization between the selected...
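The dependence measure underlying such frameworks is typically the Hilbert-Schmidt Independence Criterion. As a minimal sketch (not the paper's exact formulation), the biased empirical HSIC estimator with Gaussian kernels, HSIC = tr(KHLH)/(n-1)^2 with the centering matrix H = I - (1/n)11^T, can be written as:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC: tr(K H L H) / (n - 1)^2
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

The estimator is nonnegative for positive-semidefinite kernels and is larger for dependent samples than for independent ones, which is what makes it usable as a selection criterion.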
We propose a method for the approximation of high- or even infinite-dimensional feature vectors, whi...
A key challenge in machine learning is to automatically extract relevant feature representations of ...
We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criter...
We propose a feature extraction algorithm, based on the Hilbert–Schmidt independence criterion (HSIC...
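HSIC-based selection is often run greedily over candidate features. A hypothetical sketch (a generic forward-selection loop, not the cited algorithm), using the closed form that HSIC with linear kernels reduces to for centered data:

```python
import numpy as np

def linear_hsic(X, y):
    # Biased HSIC with linear kernels on both arguments:
    # reduces to the squared cross-covariance, up to scaling
    n = len(y)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.sum((Xc.T @ yc) ** 2) / (n - 1) ** 2

def forward_select(X, y, k):
    # Greedily add the feature whose inclusion maximizes HSIC with the target
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: linear_hsic(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On synthetic data where the target depends on a single input feature, the loop recovers that feature first; nonlinear dependence would require the kernelized estimator instead.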
The presence of irrelevant features in training data is a significant obstacle for many machine lear...
Feature extraction, or dimensionality reduction, is an essential part of many machine learning appli...
Measures of statistical dependence between random variabl...
This paper provides a new insight into unsupervised feature extraction techniques based on kernel su...
Kernel Principal Component Analysis (KPCA) has proven to be a versatile tool for unsupervised learni...
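For reference, the standard KPCA recipe (double-center the kernel matrix, take its top eigenvectors, scale by the square roots of the eigenvalues) can be sketched in a few lines; this is the textbook construction, not any one paper's variant:

```python
import numpy as np

def kernel_pca(X, n_components=2, sigma=1.0):
    # RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    # Double-center the kernel matrix (centering in feature space)
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    # Top eigenvectors of the centered kernel matrix give the embedding
    w, V = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]
    w, V = w[idx], V[:, idx]
    # Projection of sample i onto component j: sqrt(lambda_j) * v_j[i]
    return V * np.sqrt(np.maximum(w, 0))
```

The returned columns are mutually orthogonal by construction, since `eigh` yields orthonormal eigenvectors of the symmetric centered kernel matrix.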
The goal of supervised feature selection is to find a subset of input features that are responsible ...