Quantization plays an important role in deep neural network (DNN) hardware. In particular, logarithmic quantization has multiple advantages for DNN hardware implementations, and its weakness in terms of lower performance at high precision compared with linear quantization has been recently remedied by what we call selective two-word logarithmic quantization (STLQ). However, there is a lack of training methods designed for STLQ or even logarithmic quantization in general. In this paper we propose a novel STLQ-aware training method, which significantly outperforms the previous state-of-the-art training method for STLQ. Moreover, our training results demonstrate that with our new training method, STLQ applied to weight parameters of ResNet-18 ...
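The abstract above builds on plain logarithmic (power-of-two) quantization, where each weight is replaced by the nearest signed power of two so that multiplications become bit-shifts in hardware. The following is a minimal illustrative sketch of that baseline scheme only, not the paper's STLQ method; the exponent range bounds `exp_min`/`exp_max` are hypothetical choices for the example.

```python
import numpy as np

def pow2_quantize(w, exp_min=-8, exp_max=0):
    """Quantize each weight to the nearest signed power of two.

    Illustrative sketch of plain logarithmic quantization (not STLQ).
    `exp_min`/`exp_max` bound the representable exponent range
    (hypothetical defaults); exact zeros are kept as zero.
    """
    w = np.asarray(w, dtype=np.float64)
    sign = np.sign(w)
    mag = np.abs(w)
    out = np.zeros_like(w)
    nz = mag > 0
    # Round the base-2 log of each magnitude to the nearest integer
    # exponent, then clip it to the representable range.
    exp = np.clip(np.round(np.log2(mag[nz])), exp_min, exp_max)
    out[nz] = sign[nz] * np.power(2.0, exp)
    return out

print(pow2_quantize([0.3, -0.7, 0.05, 0.0]))
```

Because every nonzero quantized value is ±2^k, a multiply-accumulate in hardware reduces to a shift and an add, which is the efficiency advantage the abstract refers to.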
We introduce a Power-of-Two low-bit post-training quantization (PTQ) method for deep neural network t...
In recent years Deep Neural Networks (DNNs) have been rapidly developed in various applications, tog...
This work considers a challenging Deep Neural Network (DNN) quantization task that seeks to train qua...
Despite the multifaceted benefits of stochastic computing (SC) such as low cost, low power, and flex...
In this paper we propose a novel logarithmic quantization-based DNN (Deep Neural Network) architectu...
Quantization of the weights and activations is one of the main methods to reduce the computational f...
For energy efficiency, many low-bit quantization methods for deep neural networks (DNNs) have been p...
Convolutional Neural Networks (CNN) have become a popular solution for computer vision problems. How...
Quantization of deep neural networks is a common way to optimize the networks for deployment on ener...
Quantization has shown stunning efficiency on deep neural networks, especially for portable devices w...
Network quantization is an effective solution to compress deep neural networks for practical usage. ...
The advancement of deep models poses great challenges to real-world deployment because of the limite...
Today, computer vision (CV) problems are solved with unprecedented accuracy using convolutional neur...
We consider the post-training quantization problem, which discretizes the weights of pre-trained dee...
Quantization of deep neural networks is essential for efficient implementations. Low-preci...