Recent research on reinforcement learning (RL) has suggested that trained agents are vulnerable to maliciously crafted adversarial samples. In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters and their training methods. We use sequence-to-sequence models to predict a single action or a sequence of future actions that a trained agent will make. First, we show our approximation model, based on time-series information from the agent, consistently predicts RL agents' future actions with high accuracy in a Black-box setup on a wide range of games and RL algorithms. Second, we find that although advers...
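The abstract above describes approximating a victim agent's policy purely from observed interaction traces. As an illustrative sketch only (the paper uses sequence-to-sequence models; here a simple n-gram frequency table stands in for the learned predictor, and all class and method names are hypothetical), a black-box action predictor could look like:

```python
from collections import Counter, defaultdict


class BlackBoxActionPredictor:
    """Toy stand-in for a seq2seq approximation model: predicts a
    victim agent's next action from its last `context` observed
    actions, using only black-box traces (no access to the agent's
    weights, training parameters, or training method)."""

    def __init__(self, context=2):
        self.context = context
        # Maps a tuple of recent actions to counts of the action that followed.
        self.table = defaultdict(Counter)

    def fit(self, action_traces):
        """Tabulate which action tends to follow each recent-action window."""
        for trace in action_traces:
            for i in range(self.context, len(trace)):
                key = tuple(trace[i - self.context:i])
                self.table[key][trace[i]] += 1

    def predict_next(self, recent_actions):
        """Predict a single next action, or None for an unseen context."""
        counts = self.table.get(tuple(recent_actions[-self.context:]))
        if not counts:
            return None
        return counts.most_common(1)[0][0]

    def predict_sequence(self, recent_actions, horizon):
        """Roll the predictor forward to forecast several future actions."""
        history, out = list(recent_actions), []
        for _ in range(horizon):
            action = self.predict_next(history)
            if action is None:
                break
            out.append(action)
            history.append(action)
        return out


# Example: traces from a victim that alternates left/right.
predictor = BlackBoxActionPredictor(context=2)
predictor.fit([["L", "R", "L", "R", "L", "R"]] * 3)
print(predictor.predict_next(["L", "R"]))        # → L
print(predictor.predict_sequence(["L", "R"], 3))  # → ['L', 'R', 'L']
```

An attacker can then time or shape perturbations around the forecast actions, which is the property the abstract says enables the black-box attack; the real approximation model would replace this frequency table with a trained sequence-to-sequence network conditioned on time-series observations, not just past actions.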
While significant research advances have been made in the field of deep reinforcement learning, ther...
Doctor of Philosophy, Department of Computer Science. Arslan Munir; William H. Hsu. Since the inception of D...
Adversarial attacks pose significant challenges, and detecting them at an early stage is difficult....
Adversarial examples can be useful for identifying vulnerabilities in AI systems before they are dep...
We study black-box reward poisoning attacks against reinforcement learning (RL), in which an adversa...
Black-box attacks in deep reinforcement learning usually retrain substitute policies to mimic behavi...
Robustness of Deep Reinforcement Learning (DRL) algorithms towards adversarial attacks in real world...
Deep Learning methods are known to be vulnerable to adversarial attacks. Since Deep Reinforcement Le...
Adversarial attacks in reinforcement learning (RL) often assume highly-privileged access to the vict...
Adversarial attacks against conventional Deep Learning (DL) systems and algorithms have been widely ...
The vulnerability of the high-performance machine learning models implies a security risk in applica...
In offline multi-agent reinforcement learning (MARL), agents estimate policies from a given dataset....
Modern commercial antivirus systems increasingly rely on machine learning to keep up with the rampan...
In this paper, we verify the possibility of creating a ransomware simulation that will us...
Deep reinforcement learning models are vulnerable to adversarial attacks that can decrease a victim'...