Data-free quantization can potentially address data privacy and security concerns in model compression, and thus has been widely investigated. Recently, PSAQ-ViT designed a relative value metric, patch similarity, to generate data from pre-trained vision transformers (ViTs), making the first attempt at data-free quantization for ViTs. In this paper, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs, built on top of PSAQ-ViT. More specifically, following the patch similarity metric in PSAQ-ViT, we introduce an adaptive teacher-student strategy, which facilitates the constant cyclic evolution of the generated samples and the quantized model (student) in a competitive and interactive fashion under ...
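To make the described training loop concrete, below is a minimal PyTorch sketch of the competitive, cyclic teacher-student game, under stated assumptions: `generator`, `teacher` (the frozen full-precision ViT), `student` (the quantized model), the `forward_features` accessor, and the softmax-entropy form of the patch-similarity prior are all placeholders or simplifications for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def patch_similarity_entropy(feats, eps=1e-8):
    # feats: (B, N, D) patch tokens from a ViT block. A simplified proxy for
    # the patch-similarity prior: the entropy of the row-normalized cosine-
    # similarity matrix between patches (PSAQ-ViT estimates this entropy via
    # kernel density estimation; this softmax version is only illustrative).
    f = F.normalize(feats, dim=-1)
    p = F.softmax(f @ f.transpose(-1, -2), dim=-1)   # (B, N, N)
    return -(p * (p + eps).log()).sum(dim=-1).mean()

def kd_loss(student_logits, teacher_logits):
    # KL divergence from the teacher's soft predictions to the student's.
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")

def training_step(generator, teacher, student, g_opt, s_opt, z):
    # --- Generator step (competitive): synthesize samples that (a) respect
    # the teacher's patch-similarity prior and (b) expose teacher-student
    # disagreement, i.e. inputs the quantized student still handles poorly.
    # Teacher parameters are assumed frozen (requires_grad_(False)).
    x = generator(z)
    t_feats = teacher.forward_features(x)            # assumed timm-style accessor
    disagreement = kd_loss(student(x), teacher(x))
    g_loss = -(patch_similarity_entropy(t_feats) + disagreement)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # --- Student step (interactive): distill the full-precision teacher on
    # the freshly generated samples, closing the gap the generator opened.
    x = generator(z).detach()
    s_loss = kd_loss(student(x), teacher(x).detach())
    s_opt.zero_grad(); s_loss.backward(); s_opt.step()
```

Alternating these two updates every iteration is one plausible reading of the "constant cyclic evolution" of samples and student: the generator keeps probing the student's weaknesses, and the student keeps absorbing the teacher's behavior on those probes.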