We study the generalization properties of pruned models that are winning tickets of the lottery ticket hypothesis on photorealistic datasets. We analyse their potential under conditions in which training data is scarce and comes from a non-photorealistic domain. More specifically, we investigate whether pruned models found on the popular CIFAR-10/100 and Fashion-MNIST datasets generalize to seven different datasets from the fields of digital pathology and digital heritage. Our results show that there are significant benefits in training sparse architectures over larger, over-parametrized models, since in all of our experiments pruned networks significantly outperform their larger unpruned counterparts. These results sugg...
Network pruning is an effective approach to reduce network complexity with acceptable performance co...
The recent lottery ticket hypothesis proposes that there is at least one sub-network that matches th...
In the era of foundation models with huge pre-training budgets, the downstream tasks have been shift...
The lottery ticket hypothesis suggests that sparse sub-networks of a given neural network, if initi...
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to re...
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly init...
The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning...
Pre-training serves as a broadly adopted starting point for transfer learning on various downstream ...
Pruning is a standard technique for reducing the computational cost of deep networks. Many advances ...
The Lottery Ticket Hypothesis continues to have a profound practical impact on the quest for small s...
Recent advances in deep learning optimization showed that just a subset of parameters are really nec...
Large neural networks can be pruned to a small fraction of their original size, with little loss in ...
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within a...
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural...
The conventional lottery ticket hypothesis (LTH) claims that there exists a sparse subnetwork within...
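The abstracts above all revolve around the same underlying procedure: iteratively pruning low-magnitude weights and rewinding the surviving weights to their values at initialization. As a rough reference point, here is a minimal PyTorch sketch of that iterative magnitude-pruning loop. The function name find_winning_ticket, the train_fn placeholder, and the rounds/prune_frac defaults are illustrative assumptions rather than settings taken from any of the papers above.

import copy
import torch.nn as nn
import torch.nn.utils.prune as prune


def find_winning_ticket(model, train_fn, rounds=3, prune_frac=0.2):
    """Sketch of iterative magnitude pruning with rewinding to initialization.

    train_fn(model) is a placeholder for the caller's own training loop and is
    assumed to train the model in place; rounds and prune_frac are illustrative
    defaults, not values taken from any of the papers above.
    """
    # Remember the weights at initialization so the survivors can be rewound.
    init_state = copy.deepcopy(model.state_dict())
    prunable = [(m, "weight") for m in model.modules()
                if isinstance(m, (nn.Linear, nn.Conv2d))]

    for _ in range(rounds):
        train_fn(model)  # train the (possibly already masked) network

        # Globally remove a fraction of the smallest-magnitude surviving weights;
        # torch keeps each layer's weight as weight_orig * weight_mask afterwards.
        prune.global_unstructured(
            prunable, pruning_method=prune.L1Unstructured, amount=prune_frac
        )

        # Rewind unpruned tensors (biases, batch-norm statistics, ...) to their
        # initial values; keys that no longer exist are simply skipped.
        model.load_state_dict(init_state, strict=False)
        # Pruned tensors now live under <layer>.weight_orig, so rewind them
        # explicitly; the masks stay in place, which is what makes the result
        # a candidate winning ticket.
        for name, module in model.named_modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                module.weight_orig.data.copy_(init_state[f"{name}.weight"])
    return model

The detail that matters in this sketch is the order of operations inside the loop: prune after training, then reset the surviving weights while keeping the accumulated masks. Retraining the resulting sparse sub-network from those rewound weights is what the abstracts above refer to as finding a winning ticket.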