Deep neural networks (DNNs) have been widely applied to various applications including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Although several works have studied adversarial attack and defense strategies in domains such as images and natural language processing, it remains difficult to directly transfer this knowledge to graph-structured data due to its representation challenges. Given the importance of graph analysis, an increasing number of works have started to analyze the robustness of machine learning models on graph data. Nevertheless, current studies considering adversarial behaviors on graph data usuall...
In the past few years, evaluation on adversarial examples has become a standard procedure to meas...
Graph neural networks (GNNs) have enabled the automation of many web applications that entail node c...
In recent years, deep neural networks have been shown to lack robustness and to be vulnerabl...
Graph neural networks (GNNs) have achieved tremendous success in the task of graph classification an...
Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neur...
Deep Neural Networks (DNNs) are powerful for classification tasks, finding the potential links bet...
Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting ...
Graph Neural Networks (GNNs) have been widely applied to different tasks such as bioinformatics, dru...
A cursory reading of the literature suggests that we have made a lot of progress in designing effect...
In deep learning image classification, adversarial examples, where the input is intended to add small ...
With the rapid development of neural network technologies in machine learning, neural networks are w...
Albeit displaying remarkable performance across a range of tasks, Deep Neural Networks (DNNs) are hi...
Although Deep Neural Networks (DNNs) have achieved great success on various applications, investigat...
Although Deep Neural Networks (DNNs) have achieved great success on various applications, investigat...
Despite the popularity and success of deep learning architectures in recent years, they have shown t...