The distributed nature of Federated Learning (FL) creates security vulnerabilities, including training-time attacks. Recent work has shown that well-known Byzantine-resilient aggregation schemes are in fact vulnerable to an informed adversary with access to the aggregation scheme and the updates sent by clients. Establishing an effective defense against such an adversary therefore remains a significant challenge. To the best of our knowledge, most current aggregators resist only single or partial attacks, and none is extensible to defend against new attacks. We frame the robust distributed learning problem as a game between a server and an adversary that tailors training-time attacks. We introduce RobustTailor, a...
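To make the threat concrete, the toy sketch below (not the paper's method; all client counts and scales are hypothetical) shows how an informed adversary who knows the server simply averages updates can skew the aggregate arbitrarily, while a Byzantine-resilient rule such as the coordinate-wise median resists this particular attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 8 honest clients, 2 Byzantine clients, 4-dim updates.
true_grad = np.ones(4)
honest = true_grad + 0.1 * rng.standard_normal((8, 4))

# An informed adversary who knows the aggregation rule is a plain mean can
# shift the aggregate far from the true gradient by scaling its contribution.
malicious = np.tile(-20.0 * true_grad, (2, 1))
updates = np.vstack([honest, malicious])

mean_agg = updates.mean(axis=0)          # badly skewed by the attack
median_agg = np.median(updates, axis=0)  # coordinate-wise median resists it

err_mean = np.linalg.norm(mean_agg - true_grad)
err_median = np.linalg.norm(median_agg - true_grad)
print(err_mean > 10 * err_median)  # the mean's error dwarfs the median's
```

Note that this illustrates only a fixed attack against a fixed rule; the point of the abstract is that an adversary who also knows the *robust* rule can tailor attacks against it, which is what motivates a game-theoretic defense.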