Learning invariant representations is a critical first step in a number of machine learning tasks. A common approach is given by the so-called information bottleneck principle, in which an application-dependent function of mutual information is carefully chosen and optimized. Unfortunately, in practice, these functions are not suitable for optimization purposes since they are agnostic to the metric structure of the model's parameters. In our paper, we introduce a class of losses for learning representations that are invariant to some extraneous variable of interest by inverting the class of contrastive losses, i.e., inverse contrastive loss (ICL). We show that if the extraneous variable is binary, then optimizing ICL is equivalent to optimizing a regularized MMD divergence.
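To make the inversion idea concrete, here is a minimal sketch, assuming a hinge-style pairwise contrastive loss over a batch and a binary extraneous variable. The function name `pairwise_contrastive_loss` and the `invert` flag are hypothetical illustrations of swapping the attract/repel roles, not the paper's exact ICL formulation.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z, c, margin=1.0, invert=False):
    """Hinge-style pairwise contrastive loss over representations z with
    an extraneous variable c. The standard form pulls together pairs that
    share c and pushes apart pairs that differ; invert=True swaps the two
    roles (an illustrative reading of "inverting" a contrastive loss)."""
    diff = z[:, None, :] - z[None, :, :]          # (B, B, D) pairwise differences
    d2 = (diff ** 2).sum(-1)                      # squared Euclidean distances
    d = (d2 + 1e-12).sqrt()                       # distances (eps avoids NaN grads)
    same = (c[:, None] == c[None, :]).float()     # 1 if the pair shares the attribute
    if invert:
        same = 1.0 - same                         # invert attraction/repulsion
    off_diag = 1.0 - torch.eye(len(z))            # ignore self-pairs
    attract = same * d2                           # pull "positive" pairs together
    repel = (1.0 - same) * F.relu(margin - d).pow(2)  # hinge on "negative" pairs
    return ((attract + repel) * off_diag).mean()

z = torch.randn(8, 16, requires_grad=True)  # toy representations
c = torch.randint(0, 2, (8,))               # toy binary extraneous variable
loss = pairwise_contrastive_loss(z, c, invert=True)
loss.backward()
```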
Many works have shown that strong connections relate learning from examples to regularization techni...
Is it possible to train several classifiers to perform meaningful crowd-sourcing to produce a better...
Incorporating invariances into a learning algorithm is a common problem in machine learning. We prov...
Cross entropy loss has served as the main objective function for classification-based tasks. Widely ...
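For reference, a minimal illustration of the cross-entropy objective and its equivalence to log-softmax followed by negative log-likelihood (a standard identity, not specific to any paper listed here):

```python
import torch
import torch.nn.functional as F

# Logits are unnormalized class scores; targets are class indices.
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5, 0.3]])
targets = torch.tensor([0, 1])

# F.cross_entropy applies log-softmax then negative log-likelihood.
loss = F.cross_entropy(logits, targets)

# Equivalent manual computation: mean of -log p(target class).
log_probs = F.log_softmax(logits, dim=1)
manual = -log_probs[torch.arange(2), targets].mean()
assert torch.allclose(loss, manual)
```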
Machine learning has grown exponentially over the last decade and is certainly a central topic in every s...
Designing learning systems which are invariant to certain data transformations is critical in machin...
Regularization addresses the ill-posedness of the training problem in machine learning or the recons...
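As a concrete instance of this idea, here is a short sketch of Tikhonov (ridge) regularization applied to an ill-conditioned least-squares problem; the design matrix, noise level, and regularization weight are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nearly collinear columns make the plain least-squares problem ill-conditioned.
A = rng.normal(size=(50, 2)) @ np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, -1.0])
b = A @ x_true + 0.01 * rng.normal(size=50)

# Tikhonov (ridge) regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
# with closed form (A^T A + lam * I)^{-1} A^T b.
lam = 1e-2
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized, unstable solution
```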
Contrastive learning is a representation learning method performed by contrasting a sample to other ...
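A minimal sketch of one widely used contrastive objective, InfoNCE (as popularized by SimCLR-style methods), assuming z1[i] and z2[i] are two views of the same sample; the batch size and embedding dimension are arbitrary:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE between two batches of representations where z1[i] and
    z2[i] are positive pairs (two views of one sample) and all other
    pairs in the batch serve as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature     # scaled cosine similarities
    labels = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z1 = torch.randn(32, 128)  # embeddings of view 1
z2 = torch.randn(32, 128)  # embeddings of view 2
loss = info_nce(z1, z2)
```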
Current contrastive learning methods use random transformations sampled from a...
As a seminal tool in self-supervised representation learning, contrastive learning has gained unprec...
Many datasets are biased: they contain easy-to-learn features that are ...
Learning models that are robust to distribution shifts is a key concern in the context of their real...
A grand challenge in representation learning is the development of computation...
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contra...
Building upon recent advances in entropy-regularized optimal transport, and up...
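For context, entropy-regularized optimal transport is typically solved with Sinkhorn iterations; the sketch below is a textbook version with arbitrary toy marginals and cost matrix, not the specific algorithm of this paper:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations:
    approximately minimizes <P, C> - eps * H(P) over couplings P with
    marginals a and b."""
    K = np.exp(-C / eps)        # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)       # rescale to match the column marginal b
        u = a / (K @ v)         # rescale to match the row marginal a
    return u[:, None] * K * v[None, :]

a = np.ones(5) / 5              # uniform source marginal
b = np.ones(7) / 7              # uniform target marginal
C = np.abs(np.subtract.outer(np.linspace(0, 1, 5), np.linspace(0, 1, 7)))
P = sinkhorn(a, b, C)
assert np.allclose(P.sum(axis=1), a, atol=1e-6)  # row marginal matched
```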