A method is introduced to learn and represent similarity with linear operators in kernel-induced Hilbert spaces. Transferring well-established error bounds for vector-valued large-margin classifiers to the setting of Hilbert-Schmidt operators leads to dimension-free bounds on a risk functional for linear representations and motivates a regularized objective functional. Minimization of this objective is effected by a simple technique of stochastic gradient descent. The resulting representations are tested on transfer problems in image processing, involving plane and spatial geometric invariants, handwritten characters and face recognition.
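As a concrete illustration of the optimization described above, the following is a minimal sketch in a finite-dimensional feature space, where the role of the Hilbert-Schmidt operator is played by an ordinary matrix W, similarity between inputs x and x' is scored as <Wx, Wx'>, and a regularized large-margin objective is minimized by plain stochastic gradient descent. The hinge loss, unit margin, Frobenius-norm penalty (the finite-dimensional analogue of the Hilbert-Schmidt norm), and fixed step size are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def sgd_similarity(pairs, labels, dim, rank, lam=1e-3, lr=1e-2, epochs=20, seed=0):
    """Learn a linear operator W so that s(x, x') = <Wx, Wx'> separates
    same-class pairs (label +1) from different-class pairs (label -1).

    pairs:  list of (x, x') tuples of arrays of length `dim`
    labels: array of +1 / -1 pair labels
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(dim), size=(rank, dim))
    for _ in range(epochs):
        for i in rng.permutation(len(pairs)):
            x, xp = pairs[i]
            y = labels[i]
            s = (W @ x) @ (W @ xp)              # similarity score <Wx, Wx'>
            if 1.0 - y * s > 0.0:               # hinge loss is active
                # gradient of -y * s w.r.t. W, using ds/dW = W (x x'^T + x' x^T)
                grad = -y * (W @ (np.outer(x, xp) + np.outer(xp, x)))
            else:
                grad = np.zeros_like(W)
            W -= lr * (grad + lam * W)          # SGD step with HS-norm penalty
    return W
```

In use, `pairs` could be sampled from any labelled dataset by drawing same-class and different-class example pairs; the returned W maps inputs into a rank-dimensional space in which ordinary inner products realize the learned similarity, so the representation can be transferred to new tasks, as the abstract describes for the image-processing experiments.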
We propose a method to learn simultaneously a vector-valued function and a kernel between its compo...
Kernel functions have become an extremely popular tool in machine learning, with an attractive theor...
To improve the performance of the subspace classifier, it is effective to reduce the dimensionality ...
We continue the investigation of natural conditions for a similarity function to allow learning, wit...
Despite the success of the popular kernelized support vector machines, they have two major limitatio...
Traditional supervised classification algorithms fail when unlabeled test data...
Similarity functions are widely used in many machine learning or pattern recognition task...
Subspace-based learning problems involve data whose elements are linear subspaces of a vector space...
We propose a method for the approximation of high- or even infinite-dimensional feature vectors, whi...
We propose a framework for semi-supervised learning in reproducing kernel Hilbert spaces u...
Recently, Balcan and Blum [1] suggested a theory of learning based on general similarity f...
Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical e...
Kernel functions have become an extremely popular tool in machine learning, with many applications ...
Incorporating invariance information is important for many learning problems. To exploit invariances...