When analyzing high-dimensional datasets with a deep neural network (NN), increased sparsity is desirable but requires careful selection of the sparsity parameters. In this paper, a novel distributed learning methodology is proposed to optimize the NN while addressing this challenge: the optimal sparsity in the NN is estimated via a two-player zero-sum game. In the proposed game, the sparsity parameter is the first player, aiming to increase sparsity in the NN, while the NN weights are the second player, aiming to maintain performance in the presence of increased sparsity. To solve the game, additional variables are introduced into the optimization problem such that the output at every layer in...
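As a rough illustration of the game described above (not the paper's actual algorithm, which additionally introduces layer-wise auxiliary variables and a distributed solver), the sketch below alternates a gradient-descent step on the network weights with a gradient-ascent step on a scalar sparsity parameter; the architecture, synthetic data, L1 penalty, and step sizes are all illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))  # illustrative architecture
x, y = torch.randn(256, 20), torch.randn(256, 1)                     # synthetic data

lam = torch.tensor(0.1, requires_grad=True)          # player 1: scalar sparsity parameter
opt_w = torch.optim.SGD(net.parameters(), lr=1e-2)   # player 2: network weights

for step in range(200):
    l1 = sum(p.abs().sum() for p in net.parameters())       # sparsity-promoting penalty
    loss = nn.functional.mse_loss(net(x), y) + lam * l1     # shared objective of the zero-sum game

    opt_w.zero_grad()
    if lam.grad is not None:
        lam.grad.zero_()
    loss.backward()

    opt_w.step()                    # weights descend: keep the task loss low despite the penalty
    with torch.no_grad():
        lam += 1e-3 * lam.grad      # sparsity parameter ascends: push the penalty weight up
        lam.clamp_(min=0.0)         # keep the penalty weight nonnegative

In this zero-sum reading, whatever the sparsity player gains by enlarging the penalty term is a loss for the weight player, so the alternating updates approximate a saddle point of the shared objective.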
The feed-forward neural network (FNN) has drawn great interest in many applications due to its unive...
This thesis presents a few methods to accelerate the inference of Deep Neural Networks that are lar...
In distributed optimization for large-scale learning, a major performance limi...
Sparsifying deep neural networks is of paramount interest in many areas, espec...
The growing energy and performance costs of deep learning have driven the community to reduce the si...
Sparse learning, deep networks, and adversarial learning are new paradigms and have received signifi...
In this paper, a sparsity promoting adaptive algorithm for distributed learning in diffusion network...
Training sparse neural networks with adaptive connectivity is an active research topic. Such network...
Deep learning is finding its way into the embedded world with applications such as autonomous drivin...
Communication overhead is one of the major obstacles to train large deep learning models at scale. G...
We compare classic scalar temporal difference learning with three new distributional algorithms for ...
The aim of this paper is to develop a general framework for training neural networks (NNs) in a dist...
In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neu...
Hierarchical deep neural networks are currently popular learning models for imitating the h...
Deep learning has been empirically successful in recent years thanks to the extremely over-parameter...