Graph data have been widely used to represent information from various domains, e.g., social networks and recommendation systems. Given their power, GNN models, usually valuable property of their owners, also become attractive targets for adversaries who seek to steal them. While existing works show that simple deep neural networks can be reproduced by so-called model extraction attacks, how to extract a GNN model has not been explored. In this paper, we investigate the threat of model extraction attacks against GNN models. Unlike ordinary attacks, which obtain model information via input-output query pairs alone, we utilize both the node queries and the graph structure to extract the GNN. Furthermore, we consider the stealthiness of the attack...
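To make the threat concrete, below is a minimal sketch of how such an attack might proceed: the victim GNN is treated as a black box that answers node queries with predicted labels, and the attacker trains a local surrogate GCN on those responses together with the known graph structure. This is only an illustration under assumed interfaces (the names `victim_query`, `SurrogateGCN`, and all hyperparameters are hypothetical), not the paper's actual implementation.

```python
# Hypothetical sketch of a model-extraction attack against a node-classification GNN.
# The attacker only sees the victim's predicted labels for queried nodes, plus the graph.
import torch
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in a standard GCN layer."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN serving as the attacker's local copy of the victim model."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)


def extract(victim_query, x, adj, query_idx, n_classes, epochs=200):
    """Train a surrogate on the victim's answers to node queries.

    victim_query(query_idx) -> predicted class labels for the queried nodes;
    this is the only access the attacker has to the victim model.
    """
    a_norm = normalized_adjacency(adj)
    pseudo_labels = victim_query(query_idx)          # black-box responses
    surrogate = SurrogateGCN(x.size(1), 16, n_classes)
    opt = torch.optim.Adam(surrogate.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        logits = surrogate(x, a_norm)
        loss = F.cross_entropy(logits[query_idx], pseudo_labels)
        loss.backward()
        opt.step()
    return surrogate
```

The key difference from extraction attacks on ordinary classifiers is visible in the loss computation: the surrogate's predictions for the queried nodes already depend on the neighboring nodes through the normalized adjacency, so the graph structure, not just the query responses, shapes the extracted model.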