Large-scale pre-trained models are increasingly adapted to downstream tasks through a new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not update the pre-trained model's parameters. Instead, it only learns an input perturbation, namely a prompt, to be added to the downstream task data for predictions. Given the fast development of prompt learning, a well-generalized prompt inevitably becomes a valuable asset, as significant effort and proprietary data are used to create it. This naturally raises the question of whether a prompt may leak the proprietary information of its training data. In this paper, we perform the first comprehensive privacy assessment of prompts learned by visual prompt learning through t...
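To make that setup concrete, the following is a minimal sketch of visual prompt learning, assuming the common formulation in which a frozen pre-trained backbone is reused and only a single additive pixel perturbation (the prompt) is trained. The VisualPrompt class, the ResNet-18 backbone, and the random stand-in data are illustrative assumptions, not the paper's exact protocol.

import torch
import torch.nn as nn
from torchvision import models

class VisualPrompt(nn.Module):
    # One learnable pixel perturbation ("prompt") shared by all inputs.
    def __init__(self, image_size: int = 224):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.delta  # the prompt is simply added to each image

# Freeze the pre-trained model; only the prompt receives gradients.
backbone = models.resnet18(weights=None)  # illustrative backbone choice
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

prompt = VisualPrompt()
optimizer = torch.optim.SGD(prompt.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)   # stand-in for downstream task data
labels = torch.randint(0, 1000, (8,))  # stand-in labels

logits = backbone(prompt(images))      # backbone parameters stay untouched
loss = criterion(logits, labels)
loss.backward()                        # gradient flows only into delta
optimizer.step()

Because delta is the only trainable tensor, the learned prompt itself absorbs information about the downstream training data, which is exactly why the privacy question raised above is meaningful.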
Many existing privacy-enhanced speech emotion recognition (SER) frameworks focus on perturbing the o...
This article reviews privacy challenges in machine learning and provides a critical overview of the ...
A membership inference attack (MIA) poses privacy risks for the training data of a machine learning ...
Machine learning models are increasingly utilized across impactful domains to predict individual out...
Data has been the key factor driving the development of machine learning (ML) during the past decade. How...
The wide adoption and application of masked language models (MLMs) on sensitive data (from legal to ...
We address the problem of defending predictive models, such as machine learning classifiers (Defende...
Data holders are increasingly seeking to protect their users’ privacy, whilst still maximizing their...
Model explanations provide transparency into a trained machine learning model’...
The right to be forgotten states that a data owner has the right to erase their data from an entity ...
Advances in Deep Learning (DL) have made it possible to leverage large-scale datasets to train models that...
It is observed in the literature that data augmentation can significantly mitigate membership infere...
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specializ...
Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the eff...
Privacy attacks on Machine Learning (ML) models often focus on inferring the existence of particular...