Recent advances in machine learning (ML), and in deep learning in particular, enabled by hardware improvements and large-scale data, have delivered impressive results across a wide range of computational problems such as computer vision, natural language processing, and reinforcement learning. Many of these improvements are, however, constrained to problems with large-scale curated datasets that require substantial human labor to gather. Additionally, these models tend to generalize poorly under slight distributional shifts and in low-data regimes. In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of ML. We follow this line of work and contribute a n...
Image translation between two domains is a class of problems aiming to learn a mapping from an input i...
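To make the idea of learning such a mapping concrete, below is a minimal sketch of paired image-to-image translation with a simple reconstruction objective; the tiny encoder-decoder, the `translate_step` helper, and the L1 loss are illustrative assumptions, not the method of any particular paper listed here.

```python
# Minimal paired image-to-image translation sketch (assumption: a pix2pix-style
# L1 reconstruction objective; the small encoder-decoder below is illustrative).
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def translate_step(model, optimizer, src, tgt):
    """One training step: map source-domain images toward paired target-domain images."""
    optimizer.zero_grad()
    pred = model(src)
    loss = nn.functional.l1_loss(pred, tgt)  # reconstruction loss on paired data
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyTranslator()
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)
    src = torch.randn(4, 3, 64, 64)  # toy batch from the input domain
    tgt = torch.randn(4, 3, 64, 64)  # paired targets from the output domain
    print(translate_step(model, opt, src, tgt))
```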
We propose to use pretraining to boost general image-to-image translation. Prior image-to-image tran...
In recent years, there has been rapid progress in computing performance and communication techniques...
In order to make predictions with high accuracy, conventional deep learning systems require large tr...
Meta-learning approaches to few-shot classification are computationally efficient at test time, requi...
The performance of deep learning is heavily influenced by the size of the learning samples, whose la...
While deep learning approaches have led to breakthroughs in many medical image analysis tasks, the a...
In this work, metric-based meta-learning models are proposed to learn a generic model embedding that...
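As a concrete illustration of the metric-based idea, the sketch below embeds support and query images, averages support embeddings into class prototypes, and classifies queries by nearest prototype (prototypical-network style); the embedding network and episode shapes are assumptions made for illustration.

```python
# Metric-based few-shot classification sketch (prototypical-network style).
# Assumption: a small conv embedding and a single toy episode; shapes are illustrative.
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def prototypical_logits(embed, support, support_labels, query, n_way):
    """Embed support/query images, average support embeddings per class into
    prototypes, and score queries by negative squared distance to each prototype."""
    z_support = embed(support)                # (n_way * k_shot, d)
    z_query = embed(query)                    # (n_query, d)
    prototypes = torch.stack(
        [z_support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                         # (n_way, d)
    dists = torch.cdist(z_query, prototypes)  # (n_query, n_way)
    return -dists ** 2                        # higher logit = closer prototype

if __name__ == "__main__":
    embed = EmbeddingNet()
    n_way, k_shot, n_query = 5, 3, 10
    support = torch.randn(n_way * k_shot, 3, 32, 32)
    support_labels = torch.arange(n_way).repeat_interleave(k_shot)
    query = torch.randn(n_query, 3, 32, 32)
    logits = prototypical_logits(embed, support, support_labels, query, n_way)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, n_way, (n_query,)))
    print(logits.shape, loss.item())
```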
Understanding how humans and machines recognize novel visual concepts from few examples remains a fu...
Humans show a remarkable capability to accurately solve a wide range of problems efficiently -- util...
We present the design and baseline results for a new challenge in the ChaLearn...
In this paper, we consider the framework of multi-task representation (MTR) learning where the goal ...
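To illustrate the MTR setting, the sketch below trains a single shared encoder jointly with per-task linear heads, so the representation receives gradients from every task; the toy tasks, equal loss weighting, and network sizes are assumptions made for illustration.

```python
# Multi-task representation learning sketch: one shared encoder, one head per task.
# Assumption: toy classification tasks with equal loss weighting; sizes are illustrative.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_dim: int = 16, rep_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, x):
        return self.net(x)

class MTRModel(nn.Module):
    def __init__(self, n_tasks: int, in_dim: int = 16, rep_dim: int = 32, n_classes: int = 5):
        super().__init__()
        self.encoder = SharedEncoder(in_dim, rep_dim)
        self.heads = nn.ModuleList([nn.Linear(rep_dim, n_classes) for _ in range(n_tasks)])

    def forward(self, x, task_id: int):
        return self.heads[task_id](self.encoder(x))

if __name__ == "__main__":
    n_tasks = 3
    model = MTRModel(n_tasks)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # One joint step over all tasks: the shared encoder is updated by every head.
    opt.zero_grad()
    total_loss = 0.0
    for t in range(n_tasks):
        x = torch.randn(8, 16)            # toy batch for task t
        y = torch.randint(0, 5, (8,))
        loss = nn.functional.cross_entropy(model(x, t), y)
        total_loss = total_loss + loss    # equal weighting across tasks
    total_loss.backward()
    opt.step()
    print(float(total_loss))
```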
A natural progression in machine learning research is to automate and learn from data increasingly m...
Training a model with limited data is an essential task for machine learning and visual recognition....