Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transfor...
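The collapse of representational capacity described in this abstract can be illustrated with a toy simulation: compose independent GP sample paths layer by layer and watch the output flatten. The following is a minimal, hedged sketch — the 1-D squared-exponential kernel, the lengthscale, the jitter value, and the Cholesky-based sampler are all illustrative choices, not taken from the paper itself.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    # Squared-exponential covariance on 1-D inputs.
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp(x, rng, jitter=1e-6):
    # Draw one sample path of a zero-mean GP at the inputs x,
    # with diagonal jitter for numerical stability.
    K = rbf_kernel(x, x)
    L = np.linalg.cholesky(K + jitter * np.eye(len(x)))
    return L @ rng.standard_normal(len(x))

rng = np.random.default_rng(0)
grid = np.linspace(-5.0, 5.0, 200)

# Deep GP composition: feed each layer's sample path in as the
# next layer's inputs.
h = grid
for layer in range(1, 6):
    h = sample_gp(h, rng)
    # Crude proxy for how much variation the layer output retains
    # across the input grid.
    print(layer, np.std(np.diff(h)))
```

Plotting `h` against `grid` at each depth shows the qualitative pathology the abstract describes: the composed function develops large nearly-flat regions, so most of the input space maps to a narrow range of outputs.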
…constitute two of the most important foci of modern machine learning research. In this preliminary work...
© 2018 Matthew M. Dunlop, Mark A. Girolami, Andrew M. Stuart and Aretha L. Teckentrup.
Recent resear...
Khan MEE, Immer A, Abedi E, Korzepa M. Approximate Inference Turns Deep Networks into Gaussian Proce...
We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior...
Recent years have witnessed an increasing interest in the correspondence between infinitely wide net...
Many modern machine learning methods, including deep neural networks, utilize a discrete sequence of...
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network ba...
Understanding capabilities and limitations of different network architectures is of fundamental impo...
This manuscript considers the problem of learning a random Gaussian network function using a fully c...
This article studies the infinite-width limit of deep feedforward neural networks whose weights are ...
We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonli...
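The infinite-width correspondence these snippets refer to can be made concrete with the layerwise NNGP kernel recursion for ReLU networks (the degree-1 arc-cosine kernel of Cho & Saul). The sketch below is illustrative and not drawn from any of the papers excerpted here; the weight variance σ_w² = 2 (He-style) and zero bias variance are assumed choices.

```python
import numpy as np

def relu_nngp_step(Kxx, Kxy, Kyy, sw2=2.0, sb2=0.0):
    # One layer of the infinite-width ReLU kernel recursion.
    # For (u, v) jointly Gaussian with covariance [[Kxx, Kxy], [Kxy, Kyy]]:
    #   E[relu(u) relu(v)] = sqrt(Kxx Kyy) (sin t + (pi - t) cos t) / (2 pi),
    # where cos t = Kxy / sqrt(Kxx Kyy), and E[relu(u)^2] = Kxx / 2.
    c = np.clip(Kxy / np.sqrt(Kxx * Kyy), -1.0, 1.0)
    t = np.arccos(c)
    exy = np.sqrt(Kxx * Kyy) * (np.sin(t) + (np.pi - t) * c) / (2.0 * np.pi)
    exx = Kxx / 2.0
    eyy = Kyy / 2.0
    return sw2 * exx + sb2, sw2 * exy + sb2, sw2 * eyy + sb2

# Track the kernel between two unit-variance inputs with initial
# correlation 0.5 as depth grows.
Kxx, Kxy, Kyy = 1.0, 0.5, 1.0
for layer in range(10):
    Kxx, Kxy, Kyy = relu_nngp_step(Kxx, Kxy, Kyy)

corr = Kxy / np.sqrt(Kxx * Kyy)
print(corr)  # correlation drifts toward 1 with depth
```

With σ_w² = 2 the diagonal of the kernel is preserved exactly, while the off-diagonal correlation increases monotonically with depth — a kernel-level view of the same loss of discriminative structure that the deep GP abstracts above describe.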
We propose a scalable Gaussian process model for regression by applying a deep neural network as the...