Minimizing redundancy among the elements of a latent-space embedding is a fundamental requirement, or at least a strong preference, in representation learning for capturing intrinsic informational structure. Current self-supervised learning methods minimize a pairwise covariance matrix to reduce feature redundancy and produce promising results. However, a representation over multiple variables may contain redundancy among more than two feature variables, which such pairwise regularization cannot minimize. Here we propose the High-Order Mixed-Moment-based Embedding (HOME) strategy to reduce the redundancy among any set of feature variables, which is, to the best of our knowledge, the first attempt to utilize high-order statistics/...
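The contrast this abstract draws between pairwise covariance penalties and higher-order redundancy can be made concrete. Below is a minimal PyTorch sketch, not the authors' implementation: it standardizes a batch of embeddings, penalizes the squared second-order mixed moments E[z_i z_j] (what covariance-based methods minimize), and adds the squared third-order mixed moments E[z_i z_j z_k], which a pairwise penalty cannot see. The function name `redundancy_penalties`, the loss weighting, and the exclusion of the pure-diagonal terms are illustrative assumptions.

```python
import torch

def redundancy_penalties(z: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # z: a batch of embeddings with shape (n, d).
    n, d = z.shape

    # Standardize each feature dimension so mixed moments are comparable.
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)

    # Second-order mixed moments E[z_i z_j]: the empirical covariance.
    # Pairwise methods penalize its off-diagonal entries.
    c2 = z.T @ z / n                                   # shape (d, d)
    off_diag = c2 - torch.diag(torch.diagonal(c2))
    pair_loss = off_diag.pow(2).sum()

    # Third-order mixed moments E[z_i z_j z_k]: redundancy among three
    # variables that a covariance penalty cannot detect. The pure diagonal
    # E[z_i^3] measures skewness of a single variable, not redundancy,
    # so it is masked out (an assumption of this sketch).
    c3 = torch.einsum('ni,nj,nk->ijk', z, z, z) / n    # shape (d, d, d)
    mask = torch.ones(d, d, d, device=z.device)
    idx = torch.arange(d, device=z.device)
    mask[idx, idx, idx] = 0.0
    triple_loss = (c3 * mask).pow(2).sum()

    return pair_loss, triple_loss

# Example: combine both orders in a training loss (weights are arbitrary here).
z = torch.randn(256, 32, requires_grad=True)           # batch of 256, 32-dim embeddings
pair, triple = redundancy_penalties(z)
loss = pair + 0.1 * triple
loss.backward()
```

Note that the third-order tensor has d^3 entries, so this direct formulation only scales to modest embedding widths; a practical method would need sampling or factorization, which this sketch does not attempt.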
In contrastive self-supervised learning, the common way to learn discriminative representations is to...
In contrastive self-supervised learning, positive samples are typically drawn from the same image but...
Feature selection by maximizing high-order mutual information between the selected feature vector and...
Self-supervised learning allows AI systems to learn effective representations from large amounts of ...
Recent self-supervised methods for image representation learning maximize the agreement...
Self-supervised representation learning often uses data augmentations to induce some invariance to "...
While self-supervised learning techniques are often used to mine implicit knowledge from unlabeled...
We propose a mutual information-based sufficient representation learning (MSRL) approach, which uses...
In this paper, we provide a comprehensive toolbox for understanding and enhancing self-supervised le...
Contrastive self-supervised representation learning methods maximize the similarity between the positive...
We present a self-supervised method to disentangle factors of variation in high-dimensional data tha...
Self-supervised representation learning methods aim to provide powerful deep feature learning withou...
Self-supervised learning on large-scale multi-modal datasets allows learning semantically meaningful...
Self-supervised visual representation methods are closing the gap with supervised learning performance...