We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations. Besides being fully competitive with the state of the art, our method significantly reduces time and memory complexity for self-supervised training, making it implementable for very large embedding dimensions on existing devices and more easily adjustable than previous methods to settings with limited resources. Our work follows the major paradigm where the model learns to be invariant to some predefined image transformations (cropping, blurring, color jittering, etc.), while avoiding a degenerate solution by regularizing the embedding...
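The loss described above regularizes the distribution of embeddings via kernel mean embeddings with a dot-product kernel on the hypersphere; the natural pseudometric between mean embeddings is the maximum mean discrepancy (MMD). Below is a minimal sketch of such a regularizer, not the paper's exact recipe: the kernel choice k(x, y) = exp(t·⟨x, y⟩), the temperature t, and all function and variable names are illustrative assumptions. It relies on the fact that, for a dot-product kernel on the unit sphere, the cross term E_{y~Unif}[k(x, y)] is the same constant for every x, so minimizing the squared MMD to the uniform distribution reduces to minimizing the mean pairwise kernel value over the batch.

```python
# A minimal sketch (not the paper's exact recipe) of an MMD-to-uniform
# regularizer with a rotation-invariant (dot-product) kernel on the sphere.
# The kernel k(x, y) = exp(t * <x, y>) and the temperature t are
# illustrative assumptions, not details taken from the abstract above.
import torch

def mmd_uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Batch estimate of MMD^2 between the embedding distribution and the
    uniform distribution on the hypersphere, up to an additive constant.

    For a dot-product kernel k(x, y) = phi(<x, y>) on the unit sphere, the
    cross term E_{y ~ Unif}[k(x, y)] does not depend on x, so minimizing
    MMD^2 amounts to minimizing the mean pairwise kernel value in the batch.
    """
    z = torch.nn.functional.normalize(z, dim=1)   # project onto the unit sphere
    gram = torch.exp(t * (z @ z.T))               # dot-product kernel Gram matrix
    b = z.shape[0]
    # Average only off-diagonal entries (drop the constant self-similarity terms).
    off_diag = gram.masked_select(~torch.eye(b, dtype=torch.bool, device=z.device))
    return off_diag.mean()

# Usage: combine with an invariance term between two augmented views.
if __name__ == "__main__":
    z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
    invariance = (torch.nn.functional.normalize(z1, dim=1)
                  - torch.nn.functional.normalize(z2, dim=1)).pow(2).sum(1).mean()
    loss = invariance + mmd_uniformity_loss(z1) + mmd_uniformity_loss(z2)
```

One practical appeal of this form is its cost: the loss needs only a B×B Gram matrix of dot products per batch, with no dependence of the memory footprint on cross-dimension statistics, which is consistent with the complexity benefits claimed above.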
Approximating non-linear kernels using feature maps has gained a lot of interest in recent years d...
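As a concrete instance of the feature-map approximations this snippet refers to, here is a minimal random Fourier features sketch (Rahimi & Recht, 2007) for the Gaussian kernel; the kernel choice, bandwidth sigma, feature count D, and all names are illustrative assumptions rather than details from the cited work.

```python
# A minimal random Fourier features sketch for the Gaussian (RBF) kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)); kernel choice, sigma, and D
# are illustrative assumptions, not taken from the snippet above.
import numpy as np

def random_fourier_features(X: np.ndarray, D: int = 512, sigma: float = 1.0,
                            seed: int = 0) -> np.ndarray:
    """Map X (n x d) to phi(X) (n x D) so that phi(X) @ phi(X).T
    approximates the exact Gaussian kernel matrix."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))  # frequencies ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Sanity check: feature inner products should track the exact kernel.
X = np.random.default_rng(1).normal(size=(100, 5))
phi = random_fourier_features(X)
approx = phi @ phi.T
exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)  # sigma = 1
print(np.abs(approx - exact).max())  # error shrinks as D grows
```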
Unsupervised and self-supervised representation learning has become popular in recent years for lear...
This paper considers kernels invariant to translation, rotation and dilation. We show that no non-tr...
We propose a framework for semi-supervised learning in reproducing kernel Hilbert spaces u...
We propose a method for the approximation of high- or even infinite-dimensional feature vectors, whi...
We approach self-supervised learning of image representations from a statistical dependence perspect...
We discuss how a large class of regularization methods, collectively known as spectral regularizatio...
Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical e...
We propose a family of learning algorithms based on a new form of regularization that allows us to ...
In the interest of reproducible research, this is exactly the version of the code used for numerical...
Self-supervised learning has gained popularity in recent years due to the lack of annotated datasets and...
The generalization performance of kernel methods is largely determined by the kernel, but spectral r...
Traditional sparse representation algorithms usually operate in a single Euclidean space. ...