Problems in machine learning (ML) can involve noisy input data, and ML classification methods have reached limiting accuracies when based on standard ML data sets consisting of feature vectors and their classes. Greater accuracy will require incorporating prior structural information about the data into learning. We study methods for regularizing feature vectors (unsupervised regularization), analogous to the supervised regularization used for estimating functions in ML. In particular, we regularize (denoise) ML feature vectors using Tikhonov and other regularization methods for functions on ${\bf R}^n$. A feature vector ${\bf x}=(x_1,\ldots,x_n)=\{x_q\}_{q=1}^n$ is viewed as a function of its index $q$ and smoothed using prior information on its ...
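As a rough illustration of the idea, and not the paper's exact formulation, the sketch below smooths a feature vector with a first-difference Tikhonov penalty, treating the vector as a function of its index $q$. The choice of penalty operator, the regularization weight lam, and the closed-form solve are all illustrative assumptions.

import numpy as np

def tikhonov_smooth(y, lam=1.0):
    """Smooth a feature vector y = (y_1, ..., y_n), viewed as a function of its
    index q, by solving  min_x ||y - x||^2 + lam * ||D x||^2,
    where D is a first-difference operator encoding a smoothness prior in q."""
    n = len(y)
    # First-difference matrix D: (D x)_q = x_{q+1} - x_q
    D = np.diff(np.eye(n), axis=0)
    # Closed-form Tikhonov solution: x = (I + lam * D^T D)^{-1} y
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Example: denoise a noisy feature vector
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3, 50))
noisy = clean + 0.2 * rng.standard_normal(50)
smoothed = tikhonov_smooth(noisy, lam=5.0)

Larger values of lam give smoother (more heavily regularized) feature vectors; lam = 0 returns the input unchanged.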
Regularization techniques have become a principled tool for model-based statistics and artificial in...
In system identification, the Akaike Information Criterion (AIC) is a well known method to balance t...
In many applications where collecting data is expensive, for example neurosci...
Regularization Networks and Support Vector Machines are techniques for solving certain problems of ...
Over recent years, data-intensive science has been playing an increasingly essential role in biologi...
In this thesis, we present Regularized Learning with Feature Networks (RLFN), an approach for regula...
Modern data-sets are often huge, possibly high-dimensional, and require complex non-linear parameter...
We propose a method for the approximation of high- or even infinite-dimensional feature vectors, whi...
We propose a novel algorithm, Regularized Slope Function Networks (RSFN), for classification and fea...
We develop a graph structure for feature vectors in machine learning, which we denote as a feature n...
Learning according to the structural risk minimization principle can be naturally expressed as an Iv...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, 2002. Includes bi...
We start by demonstrating that an elementary learning task—learning a linear filter from training da...
A classical algorithm in classification is the support vector machine (SVM) algorithm. Based on Vapn...
In this work we study the performance of different machine learning models by focusing on regularizatio...