We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new ...
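The kernel recursion that abstract describes can be sketched compactly. The snippet below is a hedged NumPy illustration, not the paper's implementation: the 3x3 "same" convolutions, ReLU nonlinearity, depth, and sigma_w/sigma_b values are illustrative assumptions, and the function names (cnn_gp_kernel, relu_cov, conv_prop) are hypothetical. It only shows the structural point that evaluating k(x, x') propagates a pair of H-by-W covariance maps layer by layer, roughly the cost of one forward pass of a CNN with a single filter per layer.

```python
# Minimal sketch of a CNN-GP kernel recursion (illustrative assumptions throughout).
import numpy as np

def relu_cov(c_xx, c_xy, c_yy):
    """Closed-form E[relu(u) relu(v)] for (u, v) jointly Gaussian (arc-cosine kernel)."""
    norm = np.sqrt(c_xx * c_yy)
    cos_t = np.clip(c_xy / np.maximum(norm, 1e-12), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return norm * (np.sin(theta) + (np.pi - theta) * cos_t) / (2.0 * np.pi)

def conv_prop(cov_map, sigma_w=1.0, sigma_b=0.1, k=3):
    """Pre-activation covariance after a k-by-k 'same' convolution:
    each output location averages input covariances over its receptive field."""
    H, W = cov_map.shape
    pad = k // 2
    padded = np.pad(cov_map, pad)
    out = np.zeros_like(cov_map)
    for i in range(H):
        for j in range(W):
            out[i, j] = sigma_b**2 + sigma_w**2 * padded[i:i + k, j:j + k].mean()
    return out

def cnn_gp_kernel(x, y, depth=4):
    """k(x, y) for two images of shape (C, H, W) in the infinite-filter limit."""
    # Layer-0 covariance maps: one scalar per spatial location, averaged over channels.
    c_xy = (x * y).mean(axis=0)
    c_xx = (x * x).mean(axis=0)
    c_yy = (y * y).mean(axis=0)
    for _ in range(depth):
        # Convolution layer: propagate all three covariance maps.
        c_xy, c_xx, c_yy = (conv_prop(m) for m in (c_xy, c_xx, c_yy))
        # ReLU layer: map pre-activation covariances to post-activation covariances.
        c_xy = relu_cov(c_xx, c_xy, c_yy)
        c_xx = relu_cov(c_xx, c_xx, c_xx)
        c_yy = relu_cov(c_yy, c_yy, c_yy)
    # Dense readout approximated by mean pooling over spatial locations.
    return c_xy.mean()

x, y = np.random.randn(2, 3, 8, 8)
print(cnn_gp_kernel(x, y))
```

The mean-pooled readout and mean-normalised patches are simplifications; the paper's exact construction and normalisation differ, but the per-pair cost profile (a single-filter-style forward pass over covariance maps) is the property the abstract highlights.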
We provide approximation guarantees for a linear-time inferential framework for Gaussian processes, ...
Over-parameterized residual networks (ResNets) are amongst the most successful convolutional neural ...
The successes of modern deep neural networks (DNNs) are founded on their ability to transform inputs...
Choosing appropriate architectures and regularization strategies for deep networks is crucial to go...
Recent years have witnessed an increasing interest in the correspondence between infinitely wide net...
This thesis aims to study recent theoretical work in machine learning research that seeks to better ...
Analysing and computing with Gaussian processes arising from infinitely wide neural networks has rec...
This article studies the infinite-width limit of deep feedforward neural networks whose weights are ...
Many modern machine learning methods, including deep neural networks, utilize a discrete sequence of...
Gaussian processes [1] and their deep-structured variants, such as deep Gaussian processes [2] and convo...
Deep learning has been widely applied and brought breakthroughs in speech recognition, computer visi...
Recent resear...
Gaussian Processes (GPs) are an attractive specific way of doing non-parametric Bayesian modeling in...
We present a practical way of introducing convolutional structure into Gaussian processes, making th...