Style transfer has achieved great success and attracted wide attention from both academic and industrial communities due to its flexible application scenarios. However, the dependence on a large VGG-based autoencoder gives existing style transfer models high parameter complexity, which limits their application on resource-constrained devices. Compared with many other tasks, the compression of style transfer models has been less explored. Recently, the lottery ticket hypothesis (LTH) has shown great potential in finding extremely sparse matching subnetworks which can achieve on-par or even better performance than the original full networks when trained in isolation. In this work, we for the first time perform a...
In this paper, we investigate strong lottery tickets in generative models, the subnetworks that...
Large neural networks can be pruned to a small fraction of their original size, with little loss in ...
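The magnitude-based pruning these abstracts refer to can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions, not code from any of the cited papers; `magnitude_prune` and its thresholding rule are illustrative choices:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of weights to remove (e.g. 0.9 keeps 10%).
    Returns the pruned weights and the binary mask that was applied.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    # Threshold at the k-th smallest magnitude; entries at or below it are pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.2%} of weights")
```

In the lottery-ticket procedure this masking step is applied iteratively: train, prune a fraction of the smallest weights, rewind the survivors to their initial values, and retrain.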
We propose DGST, a novel and simple Dual-Generator network architecture for text Style Transfer. Our...
We study the generalization properties of pruned models that are the winners of the lot...
Pre-training serves as a broadly adopted starting point for transfer learning on various downstream ...
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly init...
In the era of foundation models with huge pre-training budgets, the downstream tasks have been shift...
We have just witnessed an unprecedented boom in the research area of artistic style transfer ever...
The conventional lottery ticket hypothesis (LTH) claims that there exists a sparse subnetwork within...
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that perform s...
Network pruning is an effective approach to reduce network complexity with acceptable performance co...
Neural style transfer is a deep learning technique that produces an unprecedentedly rich style trans...
Lottery tickets (LTs) are able to discover accurate and sparse subnetworks that could be trained in i...
The lottery ticket hypothesis suggests that sparse sub-networks of a given neural network, if initi...
Style transfer methods produce a transferred image which is a rendering of a content image in the ma...