Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although artificial neural networks have proved useful for this task, they can leak information about their training data to adversaries because they memorize parts of it. Applying differential privacy during model training mitigates leakage attacks against trained models and allows the models to be shared safely, at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guarantees and identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to e...
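For readers unfamiliar with how differential privacy is typically added to neural-network training, the sketch below shows a minimal DP-SGD training loop using the Opacus library. This is only an illustration of the general mechanism the abstract refers to, not the authors' setup: the toy classifier, the dummy data, and all hyperparameter values (noise multiplier, clipping bound, delta) are assumptions.

```python
# Minimal sketch of differentially private training (DP-SGD) with Opacus.
# The model, data, and hyperparameters below are illustrative placeholders,
# not the configuration used in the paper summarized above.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy stand-in for a text classifier over fixed-size document features (assumption).
model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 20))
optimizer = optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Dummy features and labels, only so the example runs end to end.
features = torch.randn(1024, 300)
labels = torch.randint(0, 20, (1024,))
train_loader = DataLoader(TensorDataset(features, labels), batch_size=64)

# Wrap model, optimizer, and loader so that per-sample gradients are clipped
# and Gaussian noise is added at every optimizer step (the DP-SGD recipe).
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,   # more noise -> stronger privacy, lower accuracy
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far for a chosen delta.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"trained with ({epsilon:.2f}, 1e-5)-differential privacy")
```

The noise multiplier and clipping bound are precisely the knobs behind the privacy–utility trade-off discussed above: larger noise yields a smaller epsilon (stronger guarantee) but typically lower classification accuracy.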
The wide adoption and application of masked language models (MLMs) on sensitive data (from legal to ...
We study the privacy risks that are associated with training a neural network's weights with self-su...
Graph convolutional networks (GCNs) are a powerful architecture for representation learning on docum...
Text classifiers are regularly applied to personal texts, leaving users of these classifiers vulnera...
Attacks that aim to identify the training data of neural networks represent a severe threat to the p...
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing their...
Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the eff...
This article deals with adversarial attacks towards deep learning systems for Natural Language Proce...
The advent of more powerful cloud compute over the past decade has made it possible to train the dee...
We study a pitfall in the typical workflow for differentially private machine learning. The use of d...
This position paper deals with privacy for deep neural networks, more precisel...
Deep Learning (DL) has become increasingly popular in recent years. While DL models can achieve high...
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which ...
Recent work has demonstrated the successful extraction of training data from generative language mod...