Unsupervised representation learning aims to describe raw data efficiently in order to solve various downstream tasks. It has been approached with many techniques, such as manifold learning, diffusion maps, and, more recently, self-supervised learning. These techniques arguably all rest on the underlying assumption that target functions, associated with future downstream tasks, have low variations in densely populated regions of the input space. Unveiling minimal variations as a guiding principle behind unsupervised representation learning paves the way toward better practical guidelines for self-supervised learning algorithms.
Simplicial Embeddings (SEM) are representations learned through self-supervised learning (SSL), wher...
Given an unlabeled dataset and an annotation budget, we study how to selectively label a fixed numbe...
Dimensionality reduction methods are unsupervised approaches which learn low-dimensional spaces wher...
Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which preven...
We describe a minimalistic and interpretable method for unsupervised learning, without resorting to ...
Newly developed machine learning algorithms are heavily dependent on the choice of data representati...
Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision due to the ina...
In supervised deep learning, learning good representations for remote-sensing images (RSI) relies o...
We present a self-supervised method to disentangle factors of variation in high-dimensional data tha...
By composing graphical models with deep learning architectures, we learn generative models with the ...
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A ...
Neural networks leverage robust internal representations in order to generalise. Learning them is di...
Unsupervised representation learning (URL) that learns compact embeddings of high-dimensional data w...
This dissertation presents three contributions on unsupervised learning. First, I describe a signal ...
A majority of data processing techniques across a wide range of technical disciplines require a repr...