Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks, owing to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting the adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on adversarial black-box settings, in which the attacker does not have full access to the target model and usually uses another model, commonly referred to as a surrogate model, to craft adversarial examples that are then transferred to the target model.
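As a rough sketch of this surrogate-based transfer setting, the snippet below crafts one-step FGSM perturbations on one model and measures how often they also fool a second, independently chosen model. The ResNet-18 surrogate, DenseNet-121 target, and perturbation budget of 8/255 are illustrative assumptions, not the architectures or attack used in the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(surrogate, x, y, epsilon):
    """One-step FGSM on the surrogate:
    x_adv = x + epsilon * sign(grad_x L(surrogate(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Surrogate and target are separate networks; weights=None keeps the
# sketch self-contained (in practice both would be trained models).
surrogate = models.resnet18(weights=None).eval()
target = models.densenet121(weights=None).eval()

x = torch.rand(4, 3, 224, 224)   # placeholder image batch in [0, 1]
y = target(x).argmax(dim=1)      # labels as predicted by the target

x_adv = fgsm_attack(surrogate, x, y, epsilon=8 / 255)

# Transfer success: fraction of examples whose target prediction flips.
success = (target(x_adv).argmax(dim=1) != y).float().mean().item()
print(f"black-box transfer success rate: {success:.2%}")
```

In the paper's setting, the surrogate and target would be medical-image classifiers differing in the factors under study; here both are generic torchvision models, and an iterative attack such as PGD could be substituted for FGSM in the same loop.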
In healthcare, Artificial Intelligence (AI) has become a must-have for improving diagnosis ...
Data-driven deep learning tasks for security-related applications are gaining increasing popularity ...
Machine learning models exhibit vulnerability to adversarial examples, i.e., artificially created inputs ...
This repository contains trained models, training-validation-test splits, and other data used in experiments ...
In recent years, deep neural networks (DNNs) have become popular in many disciplines such as comput...
This paper addresses the dependence of the success rate of adversarial attacks on the deep ...
Transfer learning from natural images is used in deep neural networks (DNNs) for medical image classification ...
Telemedicine applications have recently evolved to allow patients in underdeveloped areas to re...
Failure cases of black-box deep learning, e.g., adversarial examples, might have severe consequences ...
Recent studies have shown that Convolutional Neural Networks (CNNs) are relatively easy to attack through ...