Self-supervised visual representation learning has recently attracted significant research interest. While a common way to evaluate self-supervised representations is through transfer to various downstream tasks, we instead investigate the problem of measuring their interpretability, i.e. understanding the semantics encoded in raw representations. We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts. To quantify this we introduce a decoding bottleneck: information must be captured by simple predictors, mapping concepts to clusters in representation space. This approach, which we call reverse linear probing, provides a single number sensitive to the semanticity of t...
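The reverse-probing idea described above can be sketched in a few lines: quantise the representation space into clusters, then train a simple predictor from the manually labelled concepts to the cluster indices; its accuracy is a proxy for how much concept information the representation captures. This is a minimal illustration on synthetic data, not the paper's implementation — the data, cluster count, and probe choice here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy "representations": 300 samples whose position depends on a labelled
# concept (hypothetical data standing in for real network features).
concepts = rng.integers(0, 3, size=300)              # manual concept labels
reps = rng.normal(size=(300, 16)) + concepts[:, None] * 2.0

# Step 1: decoding bottleneck — cluster the representation space.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reps)

# Step 2: "reverse" probe — a simple predictor maps concepts to clusters;
# its accuracy reflects the concept information preserved by the clusters.
onehot = np.eye(3)[concepts]
probe = LogisticRegression(max_iter=1000).fit(onehot, clusters)
score = probe.score(onehot, clusters)
print(f"reverse-probe accuracy: {score:.2f}")
```

With well-separated synthetic features the probe recovers the clusters almost perfectly; on a poorly structured representation the score drops toward chance, which is what makes it usable as a single semanticity number.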
Saliency methods provide post-hoc model interpretation by attributing input features to the model ou...
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...
The transfer learning technique enables training Deep Learning (DL) models in a data-efficient way for s...
Recently introduced self-supervised methods for image representation learning provide on par or supe...
Self-supervised learning (SSL) aims at extracting from abundant unlabelled images transferable seman...
Self-supervised learning methods have shown impressive results in downstream classification tasks. H...
Computers represent images with pixels and each pixel contains three numbers for red, green and blue...
The increasing impact of black box models, and particularly of unsupervised ones, comes with an incr...
The complexity of any information processing task is highly dependent on the space where data is rep...
Interpretability of representations in both deep generative and discriminative models is highly desi...
The success of recent deep convolutional neural networks (CNNs) depends on learning hidden represent...
This thesis investigates the possibility of efficiently adapting self-supervised representation lear...
The resurgence of unsupervised learning can be attributed to the remarkable progress of self-supervi...