We consider distributed (gradient-descent-based) learning scenarios where the server combines the gradients of learning objectives gathered from local clients. As individual data collection and learning environments can vary, some clients could transfer erroneous gradients, e.g., due to adversarial data or gradient perturbations. Further, for data privacy and security, the identities of such affected clients are often unknown to the server. In such cases, naively aggregating the resulting gradients can mislead the learning process. We propose a new server-side learning algorithm that robustly combines gradients. Our algorithm embeds the local gradients into the manifold of normalized gradients and refines their combinations via simulating a d...
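To make the embedding step concrete, here is a minimal NumPy sketch that maps each client gradient onto the unit sphere (the manifold of normalized gradients) and combines the embedded gradients. The combination shown, a re-normalized mean, is a placeholder assumption for illustration; the abstract's actual refinement procedure is truncated above and not reproduced here.

```python
import numpy as np

def embed_on_sphere(grads, eps=1e-12):
    """Map each client gradient onto the unit sphere (normalized-gradient manifold)."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads / np.maximum(norms, eps)

def combine_embedded(grads):
    """Combine embedded gradients; here a mean re-projected onto the sphere.

    This stands in for the paper's refinement step, which the truncated
    abstract describes only partially; treat it as an assumed placeholder.
    """
    unit = embed_on_sphere(grads)
    avg = unit.mean(axis=0)
    return avg / max(np.linalg.norm(avg), 1e-12)

# Toy usage: 5 clients, 3-dimensional gradients.
rng = np.random.default_rng(0)
client_grads = rng.normal(size=(5, 3))
print(combine_embedded(client_grads))
```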
Whether it occurs in artificial or biological substrates, {\it learning} is a {distributed} phenomen...
The paradigm of federated learning (FL) deals with multiple clients participating in collaborative t...
We study COMP-AMS, a distributed optimization framework based on gradient averaging and adaptive AMS...
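For background, AMSGrad is the adaptive method named in the COMP-AMS snippet above; the sketch below applies a standard AMSGrad update on the server to the average of client gradients. The class name and the plain (uncompressed) averaging are assumptions for illustration, not the framework's actual design.

```python
import numpy as np

class AMSGradServer:
    """Hypothetical server-side AMSGrad step on averaged client gradients."""
    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v (the AMSGrad correction)

    def step(self, params, client_grads):
        g = np.mean(client_grads, axis=0)            # gradient averaging
        self.m = self.beta1 * self.m + (1 - self.beta1) * g
        self.v = self.beta2 * self.v + (1 - self.beta2) * g**2
        self.v_hat = np.maximum(self.v_hat, self.v)  # keeps effective step sizes non-increasing
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Toy usage: 4 clients, 3 parameters.
server = AMSGradServer(dim=3)
grads = np.random.default_rng(2).normal(size=(4, 3))
params = server.step(np.zeros(3), grads)
print(params)
```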
Federated learning is a private-by-design distributed learning paradigm where clients train local mo...
For many data-intensive real-world applications, such as recognizing objects from images, detecting ...
Distributed learning paradigms, such as federated and decentralized learning, allow for the coordina...
Federated learning (FL) is an emerging paradigm that permits a large number of clients with heteroge...
Distributed learning, as the most popular solution for training on large-scale data for deep l...
We propose to utilize gradients for detecting adversarial and out-of-distribution samples. We introd...
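One common way to realize gradient-based detection, which may or may not match this paper's construction, is to score an input by the norm of the loss gradient with respect to that input and flag unusually large scores. A minimal sketch under that assumption, using a logistic model:

```python
import numpy as np

def gradient_score(x, w, b, y):
    """Norm of d(loss)/d(input) for a logistic model; larger scores suggest
    adversarial or out-of-distribution inputs (a common heuristic, assumed here)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability
    grad_x = (p - y) * w           # chain rule: d(cross-entropy)/dz = p - y, dz/dx = w
    return np.linalg.norm(grad_x)

w, b = np.array([1.5, -2.0]), 0.1
print(gradient_score(np.array([0.2, 0.4]), w, b, y=1.0))
```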
Federated learning enables multiple users to build a joint model by sharing their model updates (gra...
This paper considers the Byzantine fault-tolerance problem in distributed stochastic gradient descen...
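For concreteness, a standard Byzantine-robust aggregator from this literature is the coordinate-wise trimmed mean sketched below; whether this paper adopts it, a median rule, or another scheme is not visible from the truncated snippet.

```python
import numpy as np

def trimmed_mean(grads, trim_frac=0.2):
    """Coordinate-wise trimmed mean: drop the largest and smallest values in
    each coordinate before averaging, limiting Byzantine clients' influence."""
    grads = np.sort(np.asarray(grads), axis=0)  # sort each coordinate across clients
    k = int(len(grads) * trim_frac)
    kept = grads[k:len(grads) - k] if k > 0 else grads
    return kept.mean(axis=0)

# Toy usage: 6 honest clients plus 2 adversarial ones sending huge gradients.
honest = np.random.default_rng(1).normal(size=(6, 4))
byzantine = np.full((2, 4), 100.0)
print(trimmed_mean(np.vstack([honest, byzantine]), trim_frac=0.25))
```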
Previous works have demonstrated the superior performance of ensemble-based black-box attacks on transfera...
In fully distributed machine learning, privacy and security are important issues. These i...
For adversarial examples, humans can still easily classify the images even though they are corrupted...
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threa...