Large-scale datasets play a fundamental role in training deep learning models. However, dataset collection is difficult in domains that involve sensitive information. Collaborative learning techniques provide a privacy-preserving solution by enabling training over a number of private datasets that are not shared by their owners. However, it has recently been shown that existing collaborative learning frameworks are vulnerable to an active adversary that runs a generative adversarial network (GAN) attack. In this work, we propose a novel classification model that is resilient against such attacks by design. More specifically, we introduce a key-based classification model and a principled training scheme that protects class scores...
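The abstract above is truncated, so the exact formulation of the key-based model is not visible here. The snippet below is only a minimal illustrative sketch, under the assumption that class scores are computed as similarities between a learned embedding and secret, randomly drawn per-class key vectors instead of through a shared softmax layer; the class name KeyProtectedClassifier, the key dimension, and the temperature value are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch (not the paper's exact method): class scores come from cosine
# similarities between the network's embedding and secret per-class key vectors.
# In a collaborative setting, each participant would generate and keep its own keys.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyProtectedClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, num_classes: int, key_dim: int = 512):
        super().__init__()
        self.backbone = backbone                      # any feature extractor
        self.project = nn.Linear(embed_dim, key_dim)  # maps features into the key space
        # Secret, non-trainable class keys (assumed random unit vectors for illustration).
        self.register_buffer("class_keys", F.normalize(torch.randn(num_classes, key_dim), dim=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.project(self.backbone(x)), dim=1)  # unit-norm embedding
        return z @ self.class_keys.t()                          # cosine-similarity class scores


# Toy usage: a flattening "backbone" on 28x28 inputs, 10 classes, temperature-scaled logits.
model = KeyProtectedClassifier(nn.Flatten(), embed_dim=28 * 28, num_classes=10)
x, y = torch.randn(4, 1, 28, 28), torch.randint(0, 10, (4,))
loss = F.cross_entropy(model(x) / 0.1, y)
loss.backward()
```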
Data privacy in machine learning has become an urgent problem to be solved, along with machine learn...
How can multiple distributed entities train a shared deep net on their private data while protecting...
In this paper, we address the problem of privacy-preserving training and evaluation of neural networ...
Data privacy has become an increasingly important issue in Machine Learning (ML), where many approac...
We introduce a deep learning framewo...
While machine learning (ML) has made tremendous progress during the past decade, recent research has...
Distributed deep learning has potential for significant impact in preserving data privacy and improv...
AI's applicability across diverse fields is hindered by data sensitivity, privacy concerns, and l...
Since their inception, Generative Adversarial Networks (GANs) have been popular generative models acr...
This work has also been presented at SPML19, the ICML Workshop on Security and Privacy of Machine Learni...
Federated Learning (FL) has emerged as a potentially powerful privacy-preserving machine learning me...
Deep Generative Models (DGMs) allow users to synthesize data from complex, high-dimensional manifold...
In this paper, we introduce a data augmentation-based defense strategy for preventing the reconstruc...
Medical data is frequently quite sensitive in terms of data privacy and security. Federated learning...