We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
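The abstract's claim can be illustrated with a toy sketch (not the paper's exact method): if a feature's weights vary smoothly with spatial position, then observing a small subset of them suffices to predict the rest, e.g. via kernel ridge regression over positions. The weight profile, the number of observed entries, and the RBF length scale below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" weights for one feature: smooth over the
# spatial index, as convolutional filters tend to be.
n = 100
x = np.linspace(0.0, 1.0, n)
w_true = np.sin(2 * np.pi * x) * np.exp(-3 * x)

# Observe only a small fraction of the weights.
observed = rng.choice(n, size=10, replace=False)

def rbf(a, b, length=0.1):
    """RBF kernel between two sets of 1-D positions."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length**2))

# Kernel ridge regression over positions predicts the missing weights
# from the observed ones.
K = rbf(x[observed], x[observed])
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(observed)), w_true[observed])
w_pred = rbf(x, x[observed]) @ alpha

err = float(np.abs(w_pred - w_true).max())
print(f"max reconstruction error over all {n} weights: {err:.3f}")
```

With only 10 of the 100 weights observed, the reconstruction is near-exact at the observed positions and interpolates the rest; the quality of the interpolation is what the smoothness assumption buys.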
Artificial Intelligence (AI) has made a huge impact on our everyday lives. As a dominant branch of A...
Selecting appropriate parameters while making any prediction model is a tedious task. Often, while c...
In recent years the performance of deep learning algorithms has been demonstrated in a variety of a...
Deep learning has made remarkable progress in recent years. However, its parameter estimation method...
The remarkable practical success of deep learning has revealed some major surprises from a theoretic...
Designing uncertainty-aware deep learning models which are able to provide reasonable uncertainties ...
Deep learning has attracted tremendous attention from researchers in various fields of information e...
Recent successes in language modeling, notably with deep learning methods, coincide with a shift fro...
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navi...
The problem Building good predictors on complex domains means learning complicated functions. These ...
We explore unique considerations involved in fitting machine learning (ML) models to data with very ...
The possibility for one to recover the parameters (weights and biases) of a neural network thanks to t...
© 2020 National Academy of Sciences. All rights reserved. While deep learning is successful in a num...