Bayesian inference provides a general framework for incorporating prior knowledge or specific properties into machine learning models by carefully choosing a prior distribution. In this work, we propose a new type of prior distribution for convolutional neural networks, the deep weight prior, which, in contrast to previously published techniques, favors the empirically estimated structure of convolutional filters, e.g., spatial correlations of weights. We define the deep weight prior as an implicit distribution and propose a method for variational inference with this type of implicit prior. In experiments, we show that deep weight priors can improve the performance of Bayesian neural networks on several problems when training data are limited.
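As a concrete illustration of the variational setup this abstract describes, here is a minimal PyTorch sketch: a factorized Gaussian posterior over one layer's convolutional filters, regularized toward samples from a learned filter prior. The decoder `prior_decoder`, its latent size, and the crude Monte Carlo KL surrogate are illustrative assumptions, not the paper's exact construction (which bounds the implicit prior's density properly).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Variational posterior over one conv layer's 3x3 filters: a factorized
# Gaussian q(w) = N(mu, sigma^2), sampled with the reparameterization trick.
class BayesConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, k, k))
        self.log_sigma = nn.Parameter(torch.full((out_ch, in_ch, k, k), -3.0))

    def sample_weight(self):
        eps = torch.randn_like(self.mu)
        return self.mu + torch.exp(self.log_sigma) * eps

    def forward(self, x):
        return F.conv2d(x, self.sample_weight(), padding=1)

# Hypothetical stand-in for an implicit filter prior: a small decoder mapping
# latent codes to flattened 3x3 filter slices. In practice it would be fit
# beforehand on filters harvested from networks trained on source tasks.
prior_decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 9))

def kl_vs_implicit_prior(layer, n_prior_samples=64):
    # Crude surrogate for KL(q(w) || p(w)) = -H(q) - E_q[log p(w)]:
    # negative Gaussian entropy of q (up to an additive constant) plus a
    # moment-matching cross term against prior samples. Only a sketch of
    # the idea, not the paper's actual bound.
    w = layer.sample_weight().view(-1, 9)
    neg_entropy = -layer.log_sigma.sum()
    z = torch.randn(n_prior_samples, 8)
    prior_filters = prior_decoder(z)  # samples from the implicit p(w)
    cross = ((w.unsqueeze(1) - prior_filters.unsqueeze(0)) ** 2).mean()
    return neg_entropy + cross

layer = BayesConv2d(16, 32)
kl_term = kl_vs_implicit_prior(layer)  # added to the task loss in the ELBO
```

In a training loop, `kl_term` would be added (suitably weighted) to the data-fit loss, so the posterior over filters is pulled toward the structure captured by the prior.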
Encoding domain knowledge into the prior over the high-dimensional...
Specifying a Bayesian prior is notoriously difficult for complex models such as neural networks. Rea...
Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors a...
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. Ho...
Deep neural networks have bested notable benchmarks across computer vision, reinforcement learning, ...
The Bayesian treatment of neural networks dictates that a prior distribution is specified over their...
The paper deals with learning probability distributions of observed data by artificial neural networ...
What is the best way to exploit extra data -- be it unlabeled data from the same task, or labeled da...
We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonli...
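This abstract is truncated here; purely as an illustration of the kind of prior analysis it describes, below is a minimal NumPy sketch that samples a deep ReLU network's weights from an isotropic Gaussian prior and records the distribution of one unit's pre-activation at each depth. The widths, scales, and tracked statistics are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_activations(depth=5, width=256, n_draws=500, sigma=1.0):
    """Monte Carlo draws of one hidden unit's pre-activation at each layer
    of a ReLU network whose weights follow an isotropic Gaussian prior."""
    x = np.ones(width) / np.sqrt(width)  # fixed unit-norm input
    draws = {d: [] for d in range(1, depth + 1)}
    for _ in range(n_draws):
        h = x
        for d in range(1, depth + 1):
            W = rng.normal(0.0, sigma / np.sqrt(len(h)), size=(width, len(h)))
            pre = W @ h
            draws[d].append(pre[0])      # track the first unit
            h = np.maximum(pre, 0.0)     # ReLU
    return {d: np.asarray(v) for d, v in draws.items()}

# Kurtosis is one crude summary of how heavy the induced unit
# distribution's tails are at each depth.
for d, v in sample_unit_activations().items():
    kurt = ((v - v.mean()) ** 4).mean() / (v.var() ** 2)
    print(f"layer {d}: std={v.std():.3f}, kurtosis={kurt:.2f}")
```

Summaries like the per-layer standard deviation and kurtosis give a rough empirical view of how the prior over weights translates into priors over units as depth grows, which is the kind of question this line of work studies.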
Existing Bayesian treatments of neural networks are typically characterized by weak prior and approx...
While many implementations of Bayesian neural networks use large, complex hierarchical priors, in mu...