The connection between Bayesian neural networks and Gaussian processes has gained considerable attention in recent years. Hidden units have been proven to converge to a Gaussian process limit as the layer width tends to infinity. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of hidden-unit tails, which shows that unit priors become heavier-tailed going deeper, thanks to the introduced notion of generalized Weibull-tail. This finding sheds...
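The heavier-tails-with-depth claim above can be checked empirically. The following is a minimal numerical sketch, not the paper's method: it samples a hidden unit's pre-activation under i.i.d. Gaussian weight priors in a ReLU network and tracks excess kurtosis across depth (Gaussian tails give zero; heavier tails give positive values). The He-style weight scaling, widths, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_prior_samples(depth, width=5, n_draws=100_000):
    """Sample one hidden unit's pre-activation at layer `depth` for a
    fixed input, under i.i.d. N(0, 2/fan_in) weight priors and ReLU
    activations. Scalings and sizes are illustrative, not the paper's."""
    x = np.ones(width)
    h = np.tile(x, (n_draws, 1))  # one row per independent weight draw
    for _ in range(depth):
        # Fresh Gaussian weights for every draw, He-scaled variance 2/width.
        W = rng.normal(0.0, np.sqrt(2.0 / width), size=(n_draws, width, width))
        h = np.einsum('nij,nj->ni', W, np.maximum(h, 0.0))
    return h[:, 0]  # marginal prior samples of the first unit

def excess_kurtosis(s):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    s = (s - s.mean()) / s.std()
    return (s ** 4).mean() - 3.0

for depth in (1, 2, 4):
    print(depth, excess_kurtosis(unit_prior_samples(depth)))
```

At depth 1 the pre-activation is a sum of Gaussians, so its excess kurtosis stays near zero; at greater depths the products of independent Gaussian layers accumulate and the statistic grows markedly, consistent with unit priors becoming heavier-tailed going deeper.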
Understanding capabilities and limitations of different network architectures is of fundamental impo...
Existing Bayesian treatments of neural networks are typically characterized by weak prior and approx...
This article studies the infinite-width limit of deep feedforward neural networks whose weights are ...
10 pages, 5 figures, ICML'19 conference. We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonli...
Bayesian neural networks are theoretically well-understood only in the infinite-width limit, where G...
Deep neural networks have bested notable benchmarks across computer vision, reinforcement learning, ...
Recent works have suggested that finite Bayesian neural networks may sometimes outperform their infi...
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. Ho...
The Bayesian treatment of neural networks dictates that a prior distribution is specified over their...
Neural networks (NNs) are efficient tools that achieve state-of-the-art performance in ...