Abstract Distributed learning, the most widely used approach to training deep learning models on large-scale data, involves multiple participants collaborating on a training task. However, malicious behavior by some participants during training, such as Byzantine participants who disrupt or seize control of the learning process, poses a serious threat to data security. Although recent defense mechanisms exploit the variability of Byzantine nodes' gradients to filter out Byzantine values, they still cannot identify and remove subtle disturbances or attacks. To address this critical issue, we propose in this paper an algorithm named consensus aggregation. This algorithm allows computational nodes to use the information of verification...
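The abstract is truncated and does not give the details of the proposed consensus aggregation rule. As a point of reference only, the sketch below shows a generic variability-based robust aggregation baseline (coordinate-wise median over worker gradients), the kind of defense the abstract contrasts against; the aggregation choice, names, and numbers are illustrative assumptions, not the paper's method.

```python
# Illustrative only: a generic variability-based robust aggregation baseline
# (coordinate-wise median), NOT the paper's consensus aggregation rule.
import numpy as np

def coordinate_wise_median(worker_gradients):
    """Aggregate a list of gradient vectors by taking the median per coordinate.

    worker_gradients: list of 1-D numpy arrays, one per computational node.
    Returns the aggregated gradient used for the global model update.
    """
    stacked = np.stack(worker_gradients, axis=0)   # shape: (num_workers, dim)
    return np.median(stacked, axis=0)              # robust to a minority of outliers

# Hypothetical usage: 5 honest workers and 2 Byzantine workers sending large noise.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=10) for _ in range(5)]
byzantine = [rng.normal(50.0, 5.0, size=10) for _ in range(2)]
print(coordinate_wise_median(honest + byzantine))  # stays close to the honest mean
```

As the abstract notes, such variability-based filters can miss carefully crafted, small perturbations, which is the gap the proposed verification-based consensus aggregation targets.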
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning ...
A very common optimization technique in Machine Learning is Stochastic Gradient Descent (SGD). SGD c...
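Since the snippet above introduces SGD before being cut off, here is a minimal sketch of the standard update w ← w − η·g on a toy least-squares objective; the objective, step size, and variable names are illustrative assumptions.

```python
import numpy as np

# Minimal SGD sketch on a toy least-squares problem: minimize ||Xw - y||^2 / n.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(5)
eta = 0.01                                 # step size (assumed constant for simplicity)
for step in range(1000):
    i = rng.integers(0, len(y))            # sample one example uniformly at random
    grad = 2 * (X[i] @ w - y[i]) * X[i]    # stochastic gradient of the squared error
    w -= eta * grad                        # SGD update: w <- w - eta * g
print(np.linalg.norm(w - w_true))          # should be small after enough steps
```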
To study the resilience of distributed learning, the "Byzantine" literature considers a strong threa...
A distributed system consists of networked components that interact with each other in order to achi...
Distributed learning paradigms, such as federated and decentralized learning, allow for the coordina...
In federated learning (FL), a server determines a global learning model by aggregating the local lea...
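The FL abstract above is cut off before describing the aggregation step. A minimal sketch of the server-side operation it refers to, assuming a FedAvg-style weighting by local dataset size (the weighting scheme and names are assumptions, not a specific paper's protocol), is:

```python
import numpy as np

# Sketch of the server step: form the global model as a weighted average of
# local client models, weighted by the number of samples at each client.
def aggregate_global_model(local_models, local_sizes):
    """local_models: list of 1-D parameter vectors; local_sizes: samples per client."""
    weights = np.asarray(local_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]
print(aggregate_global_model(clients, sizes))   # -> [2.5, 3.5]
```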
We consider distributed (gradient descent-based) learning scenarios where the server combines the gr...
In massive IoT systems, large amounts of data are collected and stored in clouds, edge devices, and...
Accuracy obtained when training deep learning models with large amounts of data is high; however, tr...
Data poisoning attacks aim to manipulate model behavior by distorting training data. Previou...
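To make the notion of "distorting training data" concrete, the toy sketch below flips a fraction of binary labels, a simple form of data poisoning; the flip fraction and dataset are made up for illustration.

```python
import numpy as np

# Toy label-flipping poison: flip a fraction of binary labels in the training set.
def flip_labels(y, fraction, rng):
    y = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)  # indices to poison
    y[idx] = 1 - y[idx]                                    # flip 0 <-> 1
    return y

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=20)
print(y)
print(flip_labels(y, 0.3, rng))
```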
Recent trends such as the Internet of Things and pervasive computing demand novel enginee...
Federated learning allows multiple participants to collaboratively train an efficient model without ...
Federated learning is a framework for multiple devices or institutions, called local clients, to col...
For many data-intensive real-world applications, such as recognizing objects from images, detecting ...
In-network aggregation is an important paradigm for current and future networked systems, enabling e...