Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, and knowledge graph reasoning. However, GNNs have recently been shown to be vulnerable to backdoor attacks. In this work, we explore a new kind of backdoor attack, i.e., a clean-label backdoor attack, on GNNs. Unlike prior backdoor attacks on GNNs, in which the adversary can introduce arbitrary, often clearly mislabeled, inputs to the training set, in a clean-label backdoor attack the resulting poisoned inputs appear consistent with their labels and are thus less likely to be filtered out as outliers. The initial experimental results il...
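To make the clean-label setting concrete, the following is a minimal sketch of such a poisoning step for graph classification. It assumes (graph, label) pairs with integer node ids; the helper names (`attach_trigger`, `clean_label_poison`), the triangle trigger, and the 5% poisoning rate are illustrative assumptions, not the construction of any particular paper. The key property is visible in the code: the trigger is planted only into graphs that already carry the target class, so no label is ever changed.

```python
# Minimal sketch of clean-label graph poisoning (assumed, illustrative setup).
import random
import networkx as nx

def attach_trigger(host: nx.Graph, trigger: nx.Graph) -> nx.Graph:
    """Return a copy of `host` with `trigger` attached via one bridge edge."""
    offset = max(host.nodes) + 1  # assumes integer node ids, e.g. 0..n-1
    shifted = nx.relabel_nodes(trigger, {n: n + offset for n in trigger.nodes})
    poisoned = nx.union(host, shifted)  # node sets are disjoint after shifting
    poisoned.add_edge(random.choice(list(host.nodes)),
                      random.choice(list(shifted.nodes)))
    return poisoned

def clean_label_poison(dataset, target_class, trigger, rate=0.05):
    """Clean-label poisoning: only training graphs that ALREADY belong to the
    target class receive the trigger, so every poisoned sample keeps its
    original, consistent label (unlike dirty-label attacks, which flip it)."""
    candidates = [i for i, (_, y) in enumerate(dataset) if y == target_class]
    budget = min(len(candidates), max(1, int(rate * len(dataset))))
    for i in random.sample(candidates, budget):
        g, y = dataset[i]
        dataset[i] = (attach_trigger(g, trigger), y)  # label y is untouched
    return dataset

# Toy usage: 40 random graphs, binary labels, a triangle as the trigger.
data = [(nx.gnp_random_graph(12, 0.3), random.randint(0, 1)) for _ in range(40)]
data = clean_label_poison(data, target_class=1, trigger=nx.complete_graph(3))
```

At test time, the adversary would attach the same trigger to an input of any class to steer the backdoored model toward the target class; because the training-time poisons were never mislabeled, outlier- or label-based filtering is less likely to catch them.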
Graph Neural Networks (GNNs), inspired by Convolutional Neural Networks (CNNs), aggregate the messag...
Graph neural networks (GNNs) have enabled the automation of many web applications that entail node c...
Data poisoning attacks have raised serious concerns about the safety of deep neural networks...
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks benefitting...
Backdoor attacks represent a serious threat to neural network models. A backdoored model will miscla...
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain ...
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain ...
The backdoor attack is a powerful attack on deep learning models. Recently, GNNs' vulnerability...
Graph convolutional networks (GCNs) have been highly effective in addressing various grap...
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications....
Graph data, such as chemical networks and social networks, may be deemed confidential/private becaus...
Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting ...
Backdoor attacks threaten Deep Neural Networks (DNNs). To improve stealthiness, researchers propose cle...
Graph neural networks (GNNs) have achieved outstanding performance in semi-supervised learning tasks...
While graph neural networks (GNNs) dominate the state-of-the-art for exploring graphs in real-world ...