openaire: EC/H2020/101016775/EU//INTERVENE

Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals. Two types of domain knowledge are commonly available in scientific applications: (1) feature sparsity (fraction of features deemed relevant); (2) signal-to-noise ratio, quantified, for instance, as the proportion of variance explained. We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination. Specifically, we propose a new joint prior over the local (i.e., feature-specific) scale parameters that encodes knowledge about feature sparsity, ...
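To make the two ingredients above concrete, here is a minimal sketch of a Gaussian scale mixture prior with ARD-style local scales, using a plain linear model as a stand-in for a network layer. Everything in it is an illustrative assumption rather than the record's actual method: the function names (`sample_weights_gsm_ard`, `induced_pve`), the two-component spike-and-slab form chosen for the joint prior over local scales, and all hyperparameter values; the prior-predictive check at the end is only one generic way to align the induced proportion of variance explained (PVE) with a prior belief.

```python
# Minimal sketch (not the authors' code): a Gaussian scale mixture prior with
# ARD-style local scales for a linear model, plus a prior-predictive check of
# the induced proportion of variance explained (PVE). All names and values
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_weights_gsm_ard(n_features, p_relevant, slab_scale, spike_scale):
    """Each weight w_j ~ N(0, lam_j^2); the local scale lam_j comes from a
    two-component mixture that encodes feature sparsity: with probability
    p_relevant the 'slab' (large scale), otherwise the 'spike' (near zero)."""
    relevant = rng.random(n_features) < p_relevant
    lam = np.where(relevant, slab_scale, spike_scale)
    return rng.normal(0.0, lam), lam

def induced_pve(w, X, noise_var):
    """PVE of the signal f = X w under additive Gaussian noise:
    Var(f) / (Var(f) + noise_var)."""
    f = X @ w
    return f.var() / (f.var() + noise_var)

# Prior-predictive check: does the prior put mass on the PVE we believe in?
n, d = 1000, 50
X = rng.normal(size=(n, d))
pves = [
    induced_pve(
        sample_weights_gsm_ard(d, p_relevant=0.1, slab_scale=0.3,
                               spike_scale=1e-3)[0],
        X, noise_var=1.0)
    for _ in range(200)
]
print(f"prior mean PVE ~ {np.mean(pves):.2f}")  # adjust slab_scale to match belief
```

The design point the sketch illustrates: the feature-sparsity belief enters only through the distribution of the local scales, while the PVE belief constrains their magnitudes, so the two kinds of domain knowledge act on separate hyperparameters of the same Gaussian scale mixture.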
Deep neural networks have recently become astonishingly successful at many machine learning problems...
The paper deals with learning probability distributions of observed data by artificial neural networ...
We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonli...
Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network ...
Deep neural networks have bested notable benchmarks across computer vision, reinforcement learning, ...
We propose a novel approach for nonlinear regression using a two-layer neural network (NN) model str...
Sparse modeling for signal processing and machine learning has been at the focus of scientific resea...
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. Ho...
The need for function estimation in label-limited settings is common in the natural sciences. At the...
In recent years, Neural Networks (NN) have become a popular data-analytic tool in Statistics, Compu...
The Bayesian treatment of neural networks dictates that a prior distribution is specified over their...
Adopting a Bayesian approach and sampling the network parameters from their posterior distribution i...
Stochastic variational inference for Bayesian deep neural network (DNN) requires specifying priors a...
In many problem settings, parameter vectors are not merely sparse, but dependent in such a way that...