Neural networks have achieved tremendous success in a large variety of applications. However, their memory footprint and computational demand can render them impractical in settings with limited hardware or energy resources. In this work, we propose a novel algorithm to find efficient low-rank subnetworks. Remarkably, these subnetworks are determined and adapted already during the training phase, and the overall time and memory required to both train and evaluate them are significantly reduced. The main idea is to restrict the weight matrices to a low-rank manifold and to update the low-rank factors rather than the full matrix during training. To derive training updates that are restricted to the prescribed manifold...
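The core idea in this abstract (store a weight matrix only through its low-rank factors and apply gradient updates to those factors) can be illustrated with a minimal sketch. Everything below is an assumption-laden toy: a plain NumPy regression task with a factored weight matrix W = U Vᵀ, not the paper's actual manifold-constrained algorithm, and all variable names are illustrative.

```python
import numpy as np

# Toy illustration of factored training: instead of learning a full
# weight matrix W (n x m), keep W = U @ V.T with rank r and update only
# the small factors U (n x r) and V (m x r). This is NOT the paper's
# manifold algorithm, just plain gradient descent on the factors.

rng = np.random.default_rng(0)
n, m, r, N = 20, 15, 3, 200

# Synthetic rank-r target map and data (scaled to keep gradients tame).
W_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m)) / np.sqrt(n)
X = rng.standard_normal((N, n))
Y = X @ W_true

# Low-rank factors: the only parameters ever stored and updated.
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((m, r))

init_loss = np.mean((X @ (U @ V.T) - Y) ** 2)

lr = 0.02
for _ in range(2000):
    resid = X @ (U @ V.T) - Y
    dW = X.T @ resid / N          # gradient w.r.t. the (virtual) full matrix
    dU = dW @ V                   # chain rule onto the factors
    dV = dW.T @ U
    U, V = U - lr * dU, V - lr * dV

final_loss = np.mean((X @ (U @ V.T) - Y) ** 2)
print("params: full =", n * m, "| factored =", (n + m) * r)
print("loss: init =", init_loss, "| final =", final_loss)
```

The parameter-count line shows the memory argument from the abstract: 300 entries for the full matrix versus 105 for the rank-3 factors, and the savings grow with the layer size.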
This thesis presents two nonlinear model reduction methods for systems of equations. One model utili...
We address the scalability issues in low-rank matrix learning problems. Usually these problems resor...
The low-rank matrix completion problem can be solved by Riemannian optimization on a fixed-rank mani...
We propose a novel low-rank initialization framework for training low-rank deep neural networks -- n...
This paper deals with matrix completion when each column vector belongs to a low-dimensional manifol...
Low-rankness plays an important role in traditional machine learning, but is not so popular in deep ...
In this thesis, we consider resource limitations on machine learning algorithms in a variety of sett...
Compressing neural networks is a key step when deploying models for real-time or embedded applicatio...
Neural networks have gained widespread use in many machine learning tasks due to their state-of-the-...
Recently, the deep neural network (DNN) has become one of the most advanced and powerful methods use...
While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous ...
We propose a scalable framework for the learning of high-dimensional parametric maps via adaptively ...
Exciting new work on generalization bounds for neural networks (NN) given by Bartlett et al. (2017);...
In learning with recurrent or very deep feed-forward networks, employing unitary matrices in each la...