We present an effective method for supervised feature construction. The main goal of the approach is to construct a feature representation for which a set of linear hypotheses is of sufficient capacity -- large enough to contain a satisfactory solution to the considered problem and small enough to allow good generalization from a small number of training examples. We achieve this goal with a greedy procedure that constructs features by empirically fitting squared error residuals. The proposed constructive procedure is consistent and can output a rich set of features. The effectiveness of the approach is evaluated empirically by fitting a linear ridge regression model in the constructed feature space, and our empirical results indicate superior performance of the approach over competing methods.
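The following is a minimal sketch of the kind of procedure the abstract describes, not the authors' implementation: it assumes cosine ridge features cos(w·x + b), fits each new feature to the current squared-error residuals with a generic optimizer, and refits a ridge regression model in the growing feature space. The feature form, optimizer, and function names are illustrative assumptions.

```python
# Sketch of greedy feature construction by residual fitting (illustrative only).
# Assumptions not taken from the abstract: cosine ridge features, L-BFGS for the
# per-feature fit, and scikit-learn Ridge for the linear model.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge


def greedy_feature_construction(X, y, n_features=20, ridge_alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    params = []                           # (w, b) pairs defining each constructed feature
    residual = y.astype(float).copy()

    def feature(w, b):
        return np.cos(X @ w + b)

    for _ in range(n_features):
        # Fit one new feature to the current residuals by least squares.
        def loss(theta):
            w, b = theta[:d], theta[d]
            phi = feature(w, b)
            # Optimal scalar coefficient for this single feature, in closed form.
            c = phi @ residual / (phi @ phi + 1e-12)
            return np.mean((residual - c * phi) ** 2)

        theta0 = np.concatenate([rng.standard_normal(d), rng.uniform(0, 2 * np.pi, 1)])
        theta = minimize(loss, theta0, method="L-BFGS-B").x
        params.append((theta[:d], theta[d]))

        # Refit the linear model on all constructed features and update residuals.
        Phi = np.column_stack([feature(w, b) for w, b in params])
        model = Ridge(alpha=ridge_alpha).fit(Phi, y)
        residual = y - model.predict(Phi)

    return params, model


if __name__ == "__main__":
    # Toy regression problem to exercise the procedure.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 3))
    y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(200)
    params, model = greedy_feature_construction(X, y, n_features=15)
    Phi = np.column_stack([np.cos(X @ w + b) for w, b in params])
    print("train MSE:", np.mean((y - model.predict(Phi)) ** 2))
```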
The generalization error of a function approximator, feature set or smoother can be estimated direct...
We describe and analyze efficient algorithms for learning a linear predictor from examples when the ...
The increasing size of available data has led machine learning specialists to consider more complex ...
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer S...
In many prediction problems, it is not uncommon that the number of variables used to construct a for...
This paper describes a machine learning method, called Regression by Selecting Best Feature Projecti...
Machine Learning (ML) requires a certain number of features (i.e., attributes) to train the model. O...
Feature construction can substantially improve the accuracy of Machine Learning (ML) algorithms. Gen...
Baudat and Anouar [1] propose a simple greedy algorithm for estimation of an approximate basis of th...
Feature engineering is a crucial step in the process of predictive modeling. It involves the transfo...
This paper describes a machine learning method, called Regression on Feature Projections (RFP), for ...
A key challenge in machine learning is to automatically extract relevant feature representations of ...
In many classification tasks training data have missing feature values that can be acquired at a cos...
We consider the problem of learning sparse linear models for multi-label prediction tasks under a...