Pre-training serves as a broadly adopted starting point for transfer learning on various downstream tasks. Recent investigations of the lottery ticket hypothesis (LTH) demonstrate that such enormous pre-trained models can be replaced by extremely sparse subnetworks (a.k.a. matching subnetworks) without sacrificing transferability. However, practical security-critical applications usually pose more challenging requirements beyond standard transfer, which also demand that these subnetworks overcome adversarial vulnerability. In this paper, we formulate a more rigorous concept, Double-Win Lottery Tickets, in which a located subnetwork from a pre-trained model can be independently transferred to diverse downstream tasks, to reach BOTH the same standard and...
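The abstract does not show how a subnetwork is "located", but matching subnetworks in LTH-style work are typically found with iterative magnitude pruning (IMP) plus weight rewinding. Below is a minimal, hypothetical PyTorch sketch of that generic procedure; `imp_find_subnetwork`, the user-supplied `train` callback, and the 20%-per-round schedule are illustrative assumptions, not this paper's exact method.

```python
# Hypothetical sketch of iterative magnitude pruning (IMP) with weight
# rewinding, the procedure LTH-style studies typically use to locate
# matching subnetworks. Names and hyperparameters are illustrative.
import copy
import torch

def imp_find_subnetwork(model, train, rounds=5, prune_frac=0.2):
    """Return binary masks selecting a sparse subnetwork of `model`."""
    init_state = copy.deepcopy(model.state_dict())  # weights to rewind to
    # Prune only weight tensors (skip biases and other 1-D parameters).
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model, masks)  # user-supplied loop; re-applies masks each step
        # Rank the surviving (unmasked) weights globally by magnitude.
        surviving = torch.cat([
            p.detach().abs().flatten()[masks[name].flatten() > 0]
            for name, p in model.named_parameters() if name in masks
        ])
        k = max(1, int(prune_frac * surviving.numel()))
        threshold = surviving.kthvalue(k).values  # k-th smallest magnitude
        # Remove the lowest-magnitude prune_frac of surviving weights.
        for name, p in model.named_parameters():
            if name in masks:
                keep = (p.detach().abs() > threshold).float()
                masks[name] = masks[name] * keep
        model.load_state_dict(init_state)  # rewind to the original weights
    return masks
```

A training loop would then enforce the sparsity pattern by multiplying each parameter (or its gradient) by its mask after every optimizer step; the returned masks plus the rewound weights constitute the candidate "ticket".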
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly init...
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to re...
Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generati...
We study the generalization properties of pruned models that are the winners of the lot...
Lottery tickets (LTs) are able to discover accurate and sparse subnetworks that could be trained in i...
Large neural networks can be pruned to a small fraction of their original size, with little loss in ...
In the era of foundation models with huge pre-training budgets, the downstream tasks have been shift...
The lottery ticket hypothesis suggests that sparse sub-networks of a given neural network, if initi...
Pruning is a standard technique for reducing the computational cost of deep networks. Many advances ...
Robustness to adversarial attacks was shown to require a larger model capacity, and thus a larger me...
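For concreteness, the adversarial robustness these snippets refer to is conventionally measured, and trained for, against L-infinity PGD attacks (Madry et al.). The sketch below is a minimal illustration under that assumption; `pgd_attack` and the eps/alpha/steps defaults are hypothetical choices, not taken from any of the cited papers.

```python
# A minimal sketch of an L-inf PGD attack, the standard probe and training
# signal behind adversarial robustness. Defaults are illustrative assumptions.
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples within an eps-ball around inputs `x`."""
    # Random start inside the eps-ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
            x_adv = x_adv.clamp(0, 1)                              # valid pixel range
    return x_adv.detach()

# Adversarial training then minimizes the loss on these worst-case inputs:
#   loss_fn(model(pgd_attack(model, x, y, loss_fn)), y).backward()
```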
The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i...
The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning...
Style transfer has achieved great success and attracted a wide range of attention from both academic...
The recent lottery ticket hypothesis proposes that there is at least one sub-network that matches th...
Recent advances in deep learning optimization showed that just a subset of parameters is really nec...