This work evaluates the robustness of quality measures of generative models, such as the Inception Score (IS) and the Fréchet Inception Distance (FID). Analogous to the vulnerability of deep models to a variety of adversarial attacks, we show that such metrics can also be manipulated by additive pixel perturbations. Our experiments indicate that one can generate a distribution of images with very high scores but low perceptual quality. Conversely, one can optimize for small imperceptible perturbations that, when added to real-world images, deteriorate their scores. We further extend our evaluation to generative models themselves, including the state-of-the-art network StyleGANv2. We show the vulnerability of both the generative model and the ...
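Both metrics named above are statistics computed over deep features. As a minimal sketch of the Fréchet distance that underlies FID, the snippet below fits Gaussians to two feature sets and evaluates the closed-form distance between them. Note the assumptions: in the real metric the features are Inception-v3 activations of images, whereas here the random feature arrays, the sample sizes, and the `frechet_distance` helper are purely illustrative.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two (n_samples, dim) feature sets.

    FID uses this formula with Inception-v3 features:
        ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((S_a S_b)^{1/2}) via the eigenvalues of the product, which are
    # real and non-negative for PSD factors (clip guards numerical noise).
    eigs = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigs.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))   # stand-in for "real image" features
fake = rng.normal(0.5, 1.0, size=(500, 16))   # mean-shifted "generated" features

print(frechet_distance(real, real))  # identical sets: distance near zero
print(frechet_distance(real, fake))  # shifted mean: clearly positive
```

Because the distance depends only on feature means and covariances, an attacker who can nudge pixels so that the perturbed features match the reference statistics can drive the score down without improving perceptual quality, which is the manipulation the abstract describes.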
In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase...
ImageNet pre-training has enabled state-of-the-art results on many tasks. In spite of its recognized...
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural netwo...
Evaluating image generation models such as generative adversarial networks (GANs) is a challenging p...
Generative adversarial networks (GANs) are one of the most popular methods for...
Deep neural networks for computer vision are deployed in increasingly safety-critical and socially-i...
Generative adversarial networks (GANs) are a class of generative models, for which the goal is to le...
Several existing works study either adversarial or natural distributional robustness of deep neural ...
© 2019 Sukarna Barua. Generative Adversarial Networks (GANs) are a powerful class of generative models...
We develop a measure for evaluating the performance of generative networks given two sets of images....
Devising indicative evaluation metrics for the image generation task remains an open problem. The mo...
Recent advances in diffusion models have led to a quantum leap in the quality of generative visual c...
Since their conception in 2014, a large number of Generative Adversarial Networks (GANs) [2] has bee...
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer...