Deep neural networks have delivered remarkable performance and are widely used in visual tasks. However, their large size makes them costly to transmit and store. Many previous studies have explored model size compression, but they typically treat the various lossy and lossless compression methods in isolation, which makes high compression ratios hard to achieve efficiently. This work proposes a post-training model size compression method that combines lossy and lossless compression in a unified way. We first propose a unified parametric weight transformation, which ensures different lossy compression methods can be performed jointly in a post-training manner. Then, a dedicated differentiab...
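The abstract above is truncated, so the paper's actual weight transformation is not shown here. As a generic illustration of the lossy-then-lossless pipeline it describes, the sketch below applies uniform post-training quantization (lossy) to a weight tensor and then a byte-level compressor (lossless, with zlib standing in for an entropy coder). All function names and the choice of zlib are assumptions for illustration, not the paper's method.

```python
import zlib
import numpy as np

def compress_weights(weights: np.ndarray, n_bits: int = 8):
    """Toy post-training pipeline: lossy uniform quantization,
    then lossless byte-level compression (zlib as a stand-in
    for a dedicated entropy coder)."""
    # Lossy step: symmetric uniform quantization to n_bits.
    scale = np.abs(weights).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int8)
    # Lossless step: generic byte-level compressor.
    payload = zlib.compress(q.tobytes(), level=9)
    return payload, scale

def decompress_weights(payload: bytes, scale: float, shape):
    """Invert the lossless step, then dequantize."""
    q = np.frombuffer(zlib.decompress(payload), dtype=np.int8).reshape(shape)
    return q.astype(np.float32) * scale

# Example: a random "layer" of 32-bit float weights.
np.random.seed(0)
w = np.random.randn(256, 256).astype(np.float32)
blob, s = compress_weights(w)
ratio = w.nbytes / len(blob)          # overall compression ratio
w_hat = decompress_weights(blob, s, w.shape)
```

Because the quantization grid has step `scale`, the reconstruction error is bounded by `scale / 2` per weight; the int8 step alone gives 4x over float32, and the lossless coder adds further savings on top.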
Neural networks have gained widespread use in many machine learning tasks due to their state-of-the-...
Inference time, model size, and accuracy are three key factors in deep model compression. Most of ...
With the increasing popularity of deep learning in image processing, many learned lossless image com...
The success of overparameterized deep neural networks (DNNs) poses a great challenge to deploy compu...
The large memory requirements of deep neural networks limit their deployment and adoption on many de...
In this thesis we seek to make advances towards the goal of effective learned compression. This enta...
End-to-end deep trainable models are about to exceed the performance of the traditional handcrafted ...
Modern iterations of deep learning models contain millions (billions) of unique parameters, each repr...
Today's deep neural networks (DNNs) are becoming deeper and wider because of increasing demand on...
© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. While de...
The computational workload involved in Convolutional Neural Networks (CNNs) is...
While deep neural networks are a highly successful model class, their large memory footprint puts co...
Deploying neural network models to edge devices is becoming increasingly popular because such deploy...
Deep learning methods have exhibited the great capacity to process object detection tasks, offering ...
We develop a simple and elegant method for lossless compression using latent variable models, which ...