In this paper we introduce a novel neural network architecture, in which weight matrices are re-parametrized in terms of low-dimensional vectors interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in diverse applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance on a collaborative filtering task (MovieLens).
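To make the re-parametrization concrete, the sketch below is an illustrative assumption rather than code from the paper: each input unit i and output unit j carries a learned low-dimensional embedding, and the weight W[j, i] is produced by a kernel applied to the pair of embeddings. The class name, the Gaussian kernel, and the latent dimension are all hypothetical choices.

```python
import torch
import torch.nn as nn

class KernelReparametrizedLinear(nn.Module):
    """Linear layer whose weight matrix is not stored directly.

    Each input unit i has a latent vector u_i and each output unit j a latent
    vector v_j; the effective weight is W[j, i] = k(u_i, v_j) for a kernel k.
    A Gaussian (RBF) kernel is assumed here for illustration.
    """

    def __init__(self, in_features, out_features, latent_dim=2, lengthscale=1.0):
        super().__init__()
        # Low-dimensional embeddings replace the dense weight matrix.
        self.u = nn.Parameter(torch.randn(in_features, latent_dim))
        self.v = nn.Parameter(torch.randn(out_features, latent_dim))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.lengthscale = lengthscale

    def weight(self):
        # Pairwise squared distances between output and input embeddings.
        d2 = torch.cdist(self.v, self.u) ** 2          # (out_features, in_features)
        return torch.exp(-d2 / (2 * self.lengthscale ** 2))

    def forward(self, x):
        return x @ self.weight().T + self.bias


layer = KernelReparametrizedLinear(in_features=784, out_features=128, latent_dim=2)
y = layer(torch.randn(32, 784))
print(y.shape)  # torch.Size([32, 128])
```

Under such a parametrization the layer stores only (in_features + out_features) * latent_dim numbers instead of in_features * out_features, which is one way a kernel re-parametrization can impose structure on the weights.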
Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without excep...
Matrix completion problems arise in many applications including recommendation systems, computer vis...
It has been observed in numerical simulations that a weight decay can improve gener...
In this work, we suggest Kernel Filtering Linear Overparameterization (KFLO), where a linear cascade...
There has been a recent revolution in machine learning based on the following simple idea. Instead o...
The vast majority of advances in deep neural network research operate on the basis of a real-valued ...
Neural networks have not been widely studied in Collaborative Filtering. For i...
The importance of weight initialization when building a deep learning model is often underappreciate...
We present weight normalization: a reparameterization of the weight vectors in a neural net...
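For reference, the standard weight-normalization reparameterization writes each weight vector as w = g * v / ||v||, so its length g and direction v are optimized separately. The snippet below is a minimal sketch of that formula under those assumptions, not code taken from the abstract above.

```python
import torch

def weight_norm(v, g):
    # w = g * v / ||v||: direction comes from v, scale from the
    # per-output-unit scalar g, so the two are decoupled.
    return g.unsqueeze(1) * v / v.norm(dim=1, keepdim=True)

v = torch.randn(128, 784)  # unconstrained direction parameters (one row per output unit)
g = torch.ones(128)        # learned per-row scale
w = weight_norm(v, g)      # effective weight matrix used in the forward pass
```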
The weight matrix (WM) of a neural network (NN) is its program. The programs of many traditional NNs...
Deep Learning architectures in which neural layers alternate with mappings to infinite-dimensional fe...
We propose a new indirect encoding scheme for neural networks in which the weight matrices are repr...
The aim of this paper is to introduce two widely applicable regularization methods based on the dire...
We introduce a new family of positive-definite kernels that mimic the computation in large neural ne...