Pruning deep neural networks is a widely used strategy to alleviate the computational burden in machine learning. Overwhelming empirical evidence suggests that pruned models retain very high accuracy even with a tiny fraction of parameters. However, relatively little work has gone into characterising the small pruned networks obtained, beyond a measure of their accuracy. In this paper, we use the sparse double descent approach to univocally identify and characterise pruned models associated with classification tasks. We observe empirically that, for a given task, iterative magnitude pruning (IMP) tends to converge to networks of comparable size even when starting from full networks whose sizes span orders of magnitude. We analyse th...
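Since the abstract centres on iterative magnitude pruning, a minimal sketch of IMP with weight rewinding may help fix ideas. This is not the paper's experimental setup: the model, data loader, hyperparameters, and the use of global magnitude pruning are all illustrative assumptions.

```python
# Minimal sketch of iterative magnitude pruning (IMP) with rewinding.
# All hyperparameters and the training loop are placeholders, not the
# configuration used in the paper.
import copy
import torch
import torch.nn as nn

def train(model, loader, masks=None, epochs=1, lr=1e-2):
    """Plain SGD training; re-applies pruning masks after each step
    so that pruned weights stay at zero."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.mul_(masks[name])

def magnitude_mask(model, sparsity):
    """Global mask keeping the largest-magnitude weights;
    biases (1-D parameters) are left unpruned."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    k = int(sparsity * weights.numel())
    threshold = weights.kthvalue(k).values if k > 0 else torch.tensor(0.0)
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}

def imp(model, loader, rounds=5, prune_frac=0.2):
    """Train, prune the smallest-magnitude weights, rewind the survivors
    to their initial values, and repeat."""
    init_state = copy.deepcopy(model.state_dict())
    sparsity, masks = 0.0, None
    for _ in range(rounds):
        train(model, loader, masks)
        # Prune a fixed fraction of the remaining weights each round.
        sparsity = 1.0 - (1.0 - sparsity) * (1.0 - prune_frac)
        masks = magnitude_mask(model, sparsity)
        model.load_state_dict(init_state)   # rewind to initialisation
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])     # zero out pruned weights
    return model, masks
```

Under this sketch, the abstract's observation would correspond to `imp` reaching masks of similar density for a given task regardless of the width of the starting `model`.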