Gaussian multiplicative noise is commonly used as a stochastic regularisation technique in the training of deterministic neural networks. A recent paper reinterpreted the technique as a specific algorithm for approximate inference in Bayesian neural networks; several extensions ensued. We show that the log-uniform prior used in all the above publications does not generally induce a proper posterior, and thus Bayesian inference in such models is ill-posed. Independently of the log-uniform prior, the correlated weight noise approximation has further issues, leading either to an infinite objective or to a high risk of overfitting. The above implies that the reported sparsity of the obtained solutions cannot be explained by Bayesian or the related minimum description length arguments.
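As a minimal illustration of why the log-uniform prior is problematic (a sketch in notation chosen here for exposition, with $w$ denoting a single weight; the abstract itself gives no formulas):

\[
p(\log |w|) \propto 1
\quad\Longleftrightarrow\quad
p(|w|) \propto \frac{1}{|w|},
\qquad
\int_0^\infty \frac{1}{w}\,\mathrm{d}w = \infty ,
\]

so the prior admits no finite normaliser, with the integral diverging at both $w \to 0$ and $w \to \infty$; the claim above is that, in general, the likelihood does not restore normalisability of the resulting posterior.

For readers unfamiliar with the regularisation technique itself, the following is a minimal sketch of Gaussian multiplicative noise (Gaussian dropout) applied to a layer's activations. The function name gaussian_dropout and the variance parameter alpha are illustrative choices, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

def gaussian_dropout(h, alpha, training=True):
    # Multiply each activation by xi ~ N(1, alpha). The noise has unit
    # mean, so the layer is unbiased in expectation and test-time
    # evaluation simply uses the noiseless activations. alpha plays the
    # role of p / (1 - p) for Bernoulli dropout with drop probability p.
    if not training:
        return h
    xi = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=h.shape)
    return h * xi

# Example: regularise a batch of hidden activations with alpha = 0.25,
# roughly matching Bernoulli dropout with p = 0.2.
h = rng.standard_normal((4, 8))
h_noisy = gaussian_dropout(h, alpha=0.25)

In the variational reinterpretation discussed above, alpha is no longer a fixed hyperparameter but a variational parameter learned per weight, which is where the log-uniform prior and its improperness enter.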