Protecting privacy in gradient-based learning has become increasingly critical as models are trained on ever more sensitive data. Many existing solutions protect the sensitive gradients by constraining the overall privacy cost within a fixed budget, where the protection mechanism is hand-designed and empirically calibrated to boost the utility of the resulting model. However, it remains challenging to choose a protection mechanism adapted to specific constraints so that utility is maximized. To this end, we propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks. The learned protector can be applied to private learning tasks to improve utility within the specifi...
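The gradient protection these abstracts refer to is most commonly realized by per-example clipping followed by calibrated Gaussian noise, as in DP-SGD-style training. A minimal sketch of that mechanism follows; the function names and the `1e-12` stabilizer are illustrative choices, not drawn from any of the cited papers:

```python
import math
import random

def clip_gradient(grad, clip_norm):
    """Scale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(x * x for x in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [x * scale for x in grad]

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """Average clipped per-example gradients and add Gaussian noise.

    The noise standard deviation is noise_multiplier * clip_norm / batch_size,
    the calibration used by Gaussian-mechanism DP-SGD variants.
    """
    clipped = [clip_gradient(g, clip_norm) for g in per_example_grads]
    n = len(clipped)
    dim = len(clipped[0])
    avg = [sum(g[i] for g in clipped) / n for i in range(dim)]
    sigma = noise_multiplier * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

With `noise_multiplier = 0` the function reduces to plain clipped averaging; with noise, the per-step privacy cost is composed across training iterations against the overall budget that the abstracts above describe.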
This work addresses the problem of learning from large collections of data wit...
Federated Learning allows distributed entities to train a common model collabo...
Many current Internet services rely on inferences from models trained on user data. Commonly, both t...
Differentially private learning tackles tasks where the data are private and the learning process is...
Sensitive data such as medical records and business reports usually contain valuable information th...
We consider training machine learning models using data located on multiple private and geographical...
Machine learning applications in fields where data is sensitive, such as healthcare and banking, fac...
In this paper, we apply machine learning to distributed private data owned by multiple data owners, ...
The rise of connected personal devices together with privacy concerns call for...
Because learning sometimes involves sensitive data, machine learning algorithms have been extended t...
In this paper, we study the problem of protecting privacy in recommender systems. We focus on protec...
Privacy restrictions of sensitive data repositories imply that the data analysis is performed in iso...
The past decade has witnessed the fast growth and tremendous success of machine learning. However, r...
Brinkrolf J, Berger K, Hammer B. Differential private relevance learning. In: Verleysen M, ed. Proce...