Together with impressive advances touching every aspect of our society, AI technology based on Deep Neural Networks (DNNs) is bringing increasing security concerns. While attacks operating at test time initially monopolised researchers' attention, backdoor attacks, which exploit the possibility of corrupting DNN models by interfering with the training process, represent a further serious threat undermining the dependability of AI techniques. In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time. Test-time errors, however, are activated only in the presence of a triggering event. In this way, the corrupted network continues to work as expected on regular inputs, and the malicious beh...
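The mechanism described in the abstract above, corrupting a fraction of the training set with a trigger pattern and an attacker-chosen label, can be illustrated concretely. The following is a minimal sketch only, assuming NumPy image batches with values in [0, 1]; the function name poison_batch, the white-square trigger, and all parameter values are illustrative and not taken from any of the works cited here.

import numpy as np

def poison_batch(images, labels, target_class, poison_rate=0.1, trigger_size=3):
    # Sketch of classic trigger-based data poisoning: stamp a small white
    # square onto a fraction of the training images and relabel them to the
    # attacker's target class, so the trained model learns to associate the
    # trigger with that class.
    # images: float array of shape (N, H, W, C), values in [0, 1]
    # labels: int array of shape (N,)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0
    # Relabel the poisoned samples to the attacker-specified class.
    labels[idx] = target_class
    return images, labels

At test time a model trained on such data behaves normally on clean inputs, while any input carrying the same trigger is steered toward target_class, matching the behaviour described above.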
Machine learning (ML) has made tremendous progress during the past decade and is being adopted in va...
Deep neural networks (DNNs) have progressed rapidly during the past decade, and DNN models have been dep...
Backdoor attacks mislead machine-learning models to output an attacker-specified class when presente...
Deep neural networks (DNNs) are widely deployed today, from image classification to voice recognitio...
Deep learning has achieved tremendous success in the past decade. As a result, it is becoming widely dep...
Deep Neural Networks are well known to be vulnerable to adversarial attacks and backdoor attacks, wh...
One major goal of the AI security community is to securely and reliably produce and deploy deep lear...
With new applications made possible by the fusion of edge computing and artificial intelligence (AI)...
Deep learning is becoming increasingly popular in real-life applications, especially in natural lang...
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack...
Deep neural networks (DNNs), while accurate, are expensive to train. Many practitioners, therefore, ...
Nowadays, due to the huge amount of resources required for network training, pre-trained models are ...
The recent development and expansion of the field of artificial intelligence has led to a significan...