In this talk we will discuss the use of a Wasserstein loss function for learning regularisers in an adversarial manner. This talk is based on joint work with Sebastian Lunz and Ozan Öktem, see https://arxiv.org/abs/1805.11572. Author affiliation: University of Cambridge.
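As a rough illustration of the idea (not the authors' implementation, which uses deep convolutional critics with a gradient penalty): a critic f is trained to maximise E[f(clean)] − E[f(noisy)] subject to a Lipschitz constraint, approximating the Wasserstein-1 distance between the clean-image and unregularised-reconstruction distributions; the trained critic can then serve as a learned regularisation functional. The toy numpy sketch below uses a linear critic f(x) = w·x on 2-D samples, with the Lipschitz constraint enforced by projecting w onto the unit ball; all data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "clean" samples and shifted "noisy" reconstructions in R^2.
clean = rng.normal(0.0, 1.0, size=(256, 2))
noisy = clean + rng.normal(2.0, 0.5, size=(256, 2))

# Linear critic f(x) = w @ x; its Lipschitz constant is ||w||.
w = rng.normal(size=2)

# Gradient ascent on the Wasserstein critic objective
#   E[f(clean)] - E[f(noisy)],
# projecting w onto the unit ball to keep f 1-Lipschitz.
for _ in range(100):
    grad = clean.mean(axis=0) - noisy.mean(axis=0)
    w = w + 0.1 * grad
    norm = np.linalg.norm(w)
    if norm > 1.0:
        w = w / norm

# At the optimum this gap estimates the Wasserstein-1 distance
# between the two empirical distributions (within the linear critic class).
gap = clean.mean(axis=0) @ w - noisy.mean(axis=0) @ w
```

In the adversarial-regulariser setting, the trained critic would then be plugged into a variational reconstruction problem as the penalty term, so that reconstructions are pushed toward the distribution of clean images.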
Fine-tuning pre-trained deep networks is a practical way of benefiting from th...
Exploiting image patches instead of whole images has proved to be a powerful approach to tackle var...
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for vis...
Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or...
The increasingly common use of neural network classifiers in industrial and social applications of i...
We propose regularization strategies for learning discriminative models that are robust to in-class ...
In this work, we propose a framework to learn a local regularization model for solving general image...
This paper presents a novel variational approach to impose statistical constra...
Learning to predict multi-label outputs is challenging, but in many problems there is a natural metr...
Since their invention, generative adversarial networks (GANs) have become a popular approach for lea...
Capturing visual similarity among images is the core of many computer vision and pattern recognition...
The latent space of GANs contains rich semantics reflecting the training data....
A central problem in statistical learning is to design prediction algorithms that not only perform w...