Binarization of feature representation is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the commonly used method for feature binarization. Although it works well on small datasets, its performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving training strategies, and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN), which quantizes the activations to be either 0 or +1, introducing sparsity into the binary representation. We further introduce trainable threshold...
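The contrast the abstract draws between sign-based binarization and the sparsity-inducing 0/+1 scheme can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the fixed threshold value are illustrative, and in Si-BNN the threshold would be a trainable parameter rather than a constant:

```python
import numpy as np

def sign_binarize(x):
    """Conventional BNN feature binarization: every activation maps to -1 or +1."""
    return np.where(x >= 0.0, 1.0, -1.0)

def sparse_binarize(x, tau):
    """Sparsity-inducing binarization in the spirit of Si-BNN:
    activations below the threshold tau map to 0, the rest to +1,
    so the binary representation becomes sparse."""
    return np.where(x > tau, 1.0, 0.0)

x = np.array([-1.2, -0.1, 0.3, 0.8, 2.0])
print(sign_binarize(x))         # dense +/-1 codes: [-1. -1.  1.  1.  1.]
print(sparse_binarize(x, 0.5))  # sparse {0, +1} codes: [0. 0. 0. 1. 1.]
```

Note the practical difference: the {0, +1} codes let zero-valued activations be skipped entirely, whereas the dense ±1 codes do not.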
As deep neural networks grow larger, they suffer from a huge number of weights, and thus reducing th...
Model binarization is an effective method of compressing neural networks and accelerating their infe...
Quantizing weights and activations of deep neural networks is essential for deploying them in resour...
Binary neural networks (BNNs) have attracted broad research interest due to their efficient storage ...
We present a method to train self-binarizing neural networks, that is, networks that evolve their we...
Neural network binarization accelerates deep models by quantizing their weights and activations into...
Binary neural networks (BNNs) are an extremely promising method for reducing deep neural networks’ c...
Binary Neural Networks (BNNs) are receiving an upsurge of attention for bringing power-hungry deep l...
Training and running deep neural networks (NNs) often demands a lot of computa...
Thesis (Ph.D.), University of Washington, 2020. The recent renaissance of deep neural networks has lea...
Network binarization (i.e., binary neural networks, BNNs) can efficiently compress deep neural netwo...
Binary Convolutional Neural Networks (CNNs) have significantly reduced the number of arithmetic oper...
How to train a binary neural network (BinaryNet) with both high compression rate and high accuracy o...
Efficient inference of Deep Neural Networks (DNNs) is essential to making AI ubiquitous. Two importa...