Deep neural networks have shown incredible performance for inference tasks in a variety of domains. Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits the scaling of deep learning as a service (DLaaS) and its use for on-device intelligence. This work is concerned with finding universal lossless compressed representations of deep feedforward networks with synaptic weights drawn from discrete sets, and directly performing inference without full decompression. The basic insight that allows a lower rate than naive approaches is recognizing that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferen...
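To make the rate claim above concrete, here is a minimal back-of-the-envelope sketch (an illustration of the counting argument, not the coding scheme from the work itself): for a fully connected layer with n hidden units, relabeling those units leaves the computed function unchanged, so up to log2(n!) bits of a naive encoding are redundant. The layer sizes and the 4-bit weight alphabet below are hypothetical.

    import math

    def naive_bits(m: int, n: int, k: int) -> float:
        # Bits to store an m-by-n weight matrix over a k-symbol alphabet.
        return m * n * math.log2(k)

    def saved_bits(n: int) -> float:
        # Relabeling the n hidden units (and permuting the next layer's
        # weights to match) leaves the network function unchanged, so a
        # coder can save up to log2(n!) bits when all n rows are distinct.
        return math.log2(math.factorial(n))

    m, n, k = 512, 512, 16  # hypothetical layer: 512x512 weights, 4-bit alphabet
    print(f"naive encoding:      {naive_bits(m, n, k):,.0f} bits")
    print(f"permutation savings: {saved_bits(n):,.0f} bits")

For the hypothetical 512-unit layer this saves roughly 3,900 bits per layer out of about a million, and the relative savings grow with the number of nodes.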
We formulate the entropy of a quantized artificial neural network as a differentiable function that ...
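One common way to make the entropy of quantized weights differentiable, so it can be driven down by gradient descent, is soft binning: each weight is softly assigned to the codebook levels, and the entropy of the resulting soft histogram is used as a surrogate. The sketch below is illustrative only and is not claimed to be the formulation used in the work above; soft_entropy, the temperature, and the 8-level codebook are assumptions.

    import torch

    def soft_entropy(w: torch.Tensor, levels: torch.Tensor, temp: float = 0.1) -> torch.Tensor:
        # Differentiable surrogate for the entropy (in bits per weight) of
        # w quantized to the given codebook levels: a temperature-controlled
        # softmax over squared distances gives a soft one-hot assignment per
        # weight, whose mean is a soft histogram over the levels.
        d = (w.reshape(-1, 1) - levels.reshape(1, -1)) ** 2
        p = torch.softmax(-d / temp, dim=1)
        q = p.mean(dim=0)
        return -(q * torch.log2(q + 1e-12)).sum()

    w = torch.randn(10_000, requires_grad=True)   # toy weight tensor
    levels = torch.linspace(-2.0, 2.0, steps=8)   # hypothetical 3-bit codebook
    H = soft_entropy(w, levels)
    H.backward()                                  # gradients flow back to the weights
    print(f"soft entropy ~ {H.item():.3f} bits/weight (max log2(8) = 3)")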
Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize p...
The practical successes of deep neural networks have not been matched by theoretical progress that s...
Compression technologies for deep neural networks (DNNs), such as weight quantization, have been wid...
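As a reference point for what weight quantization involves, here is a minimal uniform-quantizer sketch in NumPy. It illustrates the generic technique, not any particular paper's scheme; the function names are hypothetical and a nonconstant weight range is assumed.

    import numpy as np

    def quantize_uniform(w: np.ndarray, bits: int = 8):
        # Uniformly quantize weights to 2**bits levels over their range,
        # returning integer codes plus the (scale, offset) to dequantize.
        lo, hi = w.min(), w.max()                 # assumes hi > lo
        scale = (hi - lo) / (2 ** bits - 1)
        codes = np.round((w - lo) / scale).astype(np.uint8)  # assumes bits <= 8
        return codes, scale, lo

    def dequantize(codes, scale, lo):
        return codes.astype(np.float32) * scale + lo

    w = np.random.randn(1000).astype(np.float32)
    codes, scale, lo = quantize_uniform(w, bits=8)
    err = np.abs(w - dequantize(codes, scale, lo)).max()
    print(f"max reconstruction error: {err:.4f}")  # bounded by scale / 2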
In recent years, Deep Neural Networks (DNNs) have become an area of high interest due to their ground...
Modern iterations of deep learning models contain millions (or even billions) of unique parameters, each repr...
Today's deep neural networks (DNNs) are becoming deeper and wider because of increasing demand on...
Modern compression algorithms are the result of years of research; industry standards such as MP3, J...
Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classifi...
We examine a class of stochastic deep learning models with a tractable method ...
There has been a continuous evolution in deep neural network architectures since Alex Krizhevsky pro...
While Deep Neural Networks (DNNs) have recently achieved impressive results on many classification t...
Deep neural networks (DNNs) have had tremendous success in a variety of statistical learn...
While deep neural networks are a highly successful model class, their large memory footprint puts co...