In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models ...
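The abstract above does not say how the decoupling is achieved, so the following is a minimal sketch of one common way to separate the pretext model from the target model: cluster the frozen pretext features into pseudo-labels, then train a structurally different target network on those pseudo-labels. All architectures, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual method.

    # Sketch (assumed mechanism, not the paper's): decouple pretext and
    # target models via clustering-derived pseudo-labels.
    import torch
    import torch.nn as nn

    def kmeans(x, k, iters=20):
        # Plain k-means on pretext features; returns a pseudo-label per sample.
        centers = x[torch.randperm(x.size(0))[:k]]
        for _ in range(iters):
            assign = torch.cdist(x, centers).argmin(dim=1)
            for j in range(k):
                pts = x[assign == j]
                if len(pts) > 0:
                    centers[j] = pts.mean(dim=0)
        return assign

    # 1) A small stand-in for any self-supervised pretext encoder.
    pretext = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())

    # 2) A structurally different target model, trained from scratch.
    target = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256),
                           nn.ReLU(), nn.Linear(256, 10))

    images = torch.randn(512, 3, 32, 32)        # unlabelled data (toy)
    with torch.no_grad():
        feats = pretext(images)                 # frozen pretext features
    pseudo_labels = kmeans(feats, k=10)         # cluster -> pseudo-labels

    # 3) The target model learns to predict the pseudo-labels, so its
    #    architecture never has to match the pretext encoder's.
    opt = torch.optim.SGD(target.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(target(images), pseudo_labels)
        loss.backward()
        opt.step()

Because the target model only consumes pseudo-labels, its architecture is free to differ arbitrarily from the pretext encoder's, which is the decoupling the abstract describes.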
The recent rise in machine learning has been largely made possible by novel algorithms, such as con...
This thesis investigates the possibility of efficiently adapting self-supervised representation lear...
We propose self-adaptive training -- a unified training algorithm that dynamically calibrates and en...
The complexity of any information processing task is highly dependent on the space where data is rep...
Self-supervised representation learning methods aim to provide powerful deep feature learning withou...
Self-supervised learning (SSL) aims at extracting from abundant unlabelled images transferable seman...
Computational models of learning typically train on labeled input patterns (supervised learning), un...
Although deep learning algorithms have achieved significant progress in a variety of domains, they r...
In general, large-scale annotated data are essential to training deep neural networks in order to ac...
This work investigates the unexplored usability of self-supervised representation learning in the di...
Recent advancements in self-supervised learning (SSL) have made it possible to learn generalizable visual...
We investigate methods for combining multiple self-supervised tasks, i.e., supervised tasks where dat...
Incremental learning requires a learning model to learn new tasks without forgetting the learned tas...