Automated processing, analysis, and generation of source code are among the key activities in the software and system lifecycle. While deep learning (DL) exhibits a certain level of capability in handling these tasks, current state-of-the-art DL models still suffer from robustness issues and can be easily fooled by adversarial attacks. Different from adversarial attacks on images, audio, and natural language, the structured nature of programming languages brings new challenges. In this paper, we propose a Metropolis-Hastings sampling-based identifier renaming technique, named \fullmethod (\method), which generates adversarial examples for DL models specialized for source code processing. Our in-depth evaluation on a functionali...
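The Metropolis-Hastings renaming loop described above can be sketched roughly as follows. This is a minimal illustration of the general sampling idea, not the paper's actual implementation: the `score` callback and `candidates` list are hypothetical stand-ins for the victim model's loss signal and the replacement-identifier vocabulary.

```python
import math
import random

def mh_rename(code_tokens, target_id, candidates, score, n_iter=100, temp=1.0):
    """Metropolis-Hastings search over identifier renamings.

    At each step, propose a random candidate name for `target_id` and
    accept it with probability min(1, exp((s_prop - s_cur) / temp)),
    where a higher score means the victim model is closer to a
    misprediction. Rejected proposals leave the current name unchanged.
    """
    current = target_id
    s_cur = score(code_tokens, target_id, current)
    for _ in range(n_iter):
        proposal = random.choice(candidates)
        s_prop = score(code_tokens, target_id, proposal)
        accept_prob = min(1.0, math.exp((s_prop - s_cur) / temp))
        if random.random() < accept_prob:
            current, s_cur = proposal, s_prop
    return current
```

Because acceptance is probabilistic rather than greedy, the sampler can escape local optima where no single rename immediately increases the attack score.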
Deep learning plays an important role in various disciplines, such as auto-driving, information tech...
In the thesis, we explore the prospects of creating adversarial examples using various generative mo...
Maliciously manipulated inputs for attacking machine learning methods – in particular deep neural ne...
Adversarial learning has previously demonstrated effectiveness as a tool for improving performance i...
With intentional feature perturbations to a deep learning model, the adversary generates an adversar...
Machine learning models exhibit vulnerability to adversarial examples i.e., artificially created inp...
In recent years, machine learning (ML) models have been extensively used in software analytics, such...
As deep learning models have made remarkable strides in numerous fields, a variety of adversarial at...
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign metho...
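As a reference point for the fast gradient sign method mentioned above, the attack's single step can be written in a few lines; `eps` and the gradient array here are generic placeholders rather than values tied to any particular model:

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    """Fast gradient sign method: perturb input x by eps in the
    direction of the sign of the loss gradient with respect to x
    (a one-step, untargeted adversarial attack)."""
    return x + eps * np.sign(grad)
```

The perturbation magnitude per coordinate is exactly `eps`, which keeps the adversarial example within an L-infinity ball of radius `eps` around the original input.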
Machine learning, and deep learning in particular, have recently been used to successfully address many...
As deep learning becomes more popular and grows to become a crucial component in the daily device...
Deep learning has improved the performance of many computer vision tasks. However, the features that...
Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to...
The monumental achievements of deep learning (DL) systems seem to guarantee the absolute superiority...
The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical ...