Graph Neural Networks (GNNs) have emerged as a family of effective learning methods for graph-related tasks. However, GNNs have been shown to be vulnerable to adversarial attacks, in which attackers fool GNNs into making wrong predictions on adversarial samples crafted with carefully designed perturbations. Specifically, we observe that current evasion attacks suffer from two limitations: (1) the attack strategy learned by reinforcement-learning-based methods may not transfer when the attack budget changes; (2) the greedy mechanism in the vanilla gradient-based method ignores the long-term benefit of each perturbation operation. In this paper, we propose a new attack method named projective ranking to overcome the above limitations. Our idea is to learn a...
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks su...
With the rapid development of neural network technologies in machine learning, neural networks are w...
We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks ...
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks. However, GNNs...
Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neur...
Network security analysis based on attack graphs has been applied extensively in recent years. The r...
Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting ...
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks benefitting...
Deep neural networks (DNNs) have been widely applied to various applications including image classif...
Graph neural networks (GNNs) have achieved tremendous success in the task of graph classification an...
Graph neural networks (GNNs) have enabled the automation of many web applications that entail node c...
A cursory reading of the literature suggests that we have made a lot of progress in designing effect...
Graph data has been widely used to represent data from various domains, e.g., social networks, recomm...
Graph data, such as chemical networks and social networks, may be deemed confidential/private becaus...
Graph neural networks (GNN) based collaborative filtering (CF) has attracted increasing attention in...