Network pruning is an effective approach to reducing network complexity with an acceptable compromise in performance. Existing studies achieve sparsity either through time-consuming weight training or through complex searches over networks with expanded width, which greatly limits the practical applications of network pruning. In this paper, we show that high-performing sparse sub-networks, termed "lottery jackpots", exist in pre-trained models with unexpanded width and can be found without any weight training. Furthermore, we improve the efficiency of searching for lottery jackpots from two perspectives. First, we observe that the sparse masks derived from many existing pruning criteria have a high overlap with the searched mask of our lottery jackpot, ...
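The abstract above suggests a simple recipe: freeze the pre-trained weights, initialize the mask from an existing pruning criterion such as magnitude pruning (exploiting the high-overlap observation), and then optimize only the mask. Below is a minimal PyTorch sketch of that idea under stated assumptions; the class name JackpotLinear, the score parameterization, and the straight-through gradient trick are illustrative choices, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): mask search over frozen
# pre-trained weights, with mask scores initialized from weight magnitudes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JackpotLinear(nn.Module):
    """Linear layer whose weights are frozen; only a sparse mask is learned."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor, sparsity: float):
        super().__init__()
        self.weight = nn.Parameter(weight.clone(), requires_grad=False)  # frozen
        self.bias = nn.Parameter(bias.clone(), requires_grad=False)      # frozen
        # Initializing scores with |w| makes the initial mask identical to
        # magnitude pruning, so the search starts close to the final mask.
        self.scores = nn.Parameter(weight.abs().clone())
        self.sparsity = sparsity

    def current_mask(self) -> torch.Tensor:
        # Keep the top-(1 - sparsity) fraction of scores.
        k = max(1, int(round((1.0 - self.sparsity) * self.scores.numel())))
        threshold = torch.topk(self.scores.flatten(), k).values.min()
        return (self.scores >= threshold).float()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.current_mask()
        # Straight-through estimator: the hard mask is used in the forward
        # pass, while gradients flow to the continuous scores.
        mask_ste = mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask_ste, self.bias)


# Usage: wrap a pre-trained layer and train the mask scores only.
pretrained = nn.Linear(512, 256)
layer = JackpotLinear(pretrained.weight.data, pretrained.bias.data, sparsity=0.9)
optimizer = torch.optim.SGD([layer.scores], lr=0.1)

x, y = torch.randn(32, 512), torch.randn(32, 256)
loss = F.mse_loss(layer(x), y)
loss.backward()
optimizer.step()
```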
The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning...
Neural network pruning is useful for discovering efficient, high-performing subnetworks within pre-t...
We study the generalization properties of pruned models that are the winners of the lot...
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that perform s...
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within a...
Large neural networks can be pruned to a small fraction of their original size, with little loss in ...
Pruning is a standard technique for reducing the computational cost of deep networks. Many advances ...
Random masks define surprisingly effective sparse neural network models, as has been shown empirical...
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly init...
Recent advances in deep learning optimization have shown that just a subset of parameters is really nec...
Pruning deep neural networks is a widely used strategy to alleviate the computational burden in mach...
The strong lottery ticket hypothesis has highlighted the potential for training deep neural networks...
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural...
Modern deep neural networks require a significant amount of computing time and power to train and de...