Artificial neural networks are at the heart of some of the greatest advances in modern technology. They have enabled breakthroughs in applications ranging from computer vision and machine translation to speech recognition and autonomous driving. However, a rigorous theoretical explanation of these overwhelming success stories is still lacking. Consequently, developing a better mathematical understanding of neural networks is currently one of the hottest research topics in computer science. In this thesis we provide several contributions in that direction for the simple, but practically powerful and widely used, model of feedforward neural networks with rectified linear unit (ReLU) activations. Our ...
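For concreteness, here is the standard definition of that model, given as a minimal sketch (the depth $L$, widths $n_\ell$, weight matrices $A_\ell$, and biases $b_\ell$ are generic placeholder notation, not symbols taken from the thesis itself). A feedforward ReLU network with $L$ hidden layers computes

\[
  f(x) \;=\; A_{L+1}\,\sigma\bigl(A_L\,\sigma(\cdots\,\sigma(A_1 x + b_1)\,\cdots) + b_L\bigr) + b_{L+1},
  \qquad \sigma(z) = \max\{0,z\} \text{ applied componentwise},
\]

with $A_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ and $b_\ell \in \mathbb{R}^{n_\ell}$. Every function computed this way is continuous and piecewise linear in the input $x$.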
This paper shows that neural networks which use continuous activation functions have VC dimension ...
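As background for the quantity named in this snippet (the standard definition, stated with generic symbols $\mathcal{H}$ and $S$): the VC dimension of a hypothesis class $\mathcal{H}$ of $\{0,1\}$-valued functions is

\[
  \operatorname{VCdim}(\mathcal{H}) \;=\; \max\bigl\{\, |S| \;:\; S \text{ is shattered by } \mathcal{H} \,\bigr\},
\]

where a finite set $S$ is shattered by $\mathcal{H}$ if every subset $T \subseteq S$ equals $\{x \in S : h(x) = 1\}$ for some $h \in \mathcal{H}$.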
This paper is primarily oriented towards discrete mathematics and emphasizes the occurrence ...
The computational power of neural networks depends on properties of the real numbers used as weights...
Understanding the computational complexity of training simple neural networks with rectified linear units ...
We contribute to a better understanding of the class of functions that can be represented by a neural network ...
It is well-known that neural networks are computationally hard to train. On the other hand, in practice ...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network ...
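Stated as a worked formula (a generic empirical-risk formulation, not necessarily the exact objective of the paper excerpted above): given data points $(x_1, y_1), \dots, (x_m, y_m)$ and a loss function $\ell$, the training problem for a two-layer ReLU network asks for

\[
  \min_{A_1,\, b_1,\, A_2,\, b_2} \;\sum_{i=1}^{m} \ell\bigl(A_2\,\sigma(A_1 x_i + b_1) + b_2,\; y_i\bigr),
\]

where $\sigma(z) = \max\{0,z\}$ as before.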
Neural networks (NNs) have seen a surge in popularity due to their unprecedented practical success in ...
It is shown that high-order feedforward neural nets of constant depth with piecewise-polynomial ...
In the past decade, deep learning has become the prevalent methodology for predictive modeling thanks to ...
This paper deals with a neural network model in which each neuron performs a threshold logic function ...
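To make the neuron model in this snippet concrete (the classical definition; $w$ and $t$ are generic placeholder symbols): a threshold logic neuron with weights $w \in \mathbb{R}^n$ and threshold $t \in \mathbb{R}$ maps an input $x \in \{0,1\}^n$ to

\[
  y \;=\; \begin{cases} 1 & \text{if } \sum_{i=1}^{n} w_i x_i \ge t, \\ 0 & \text{otherwise}. \end{cases}
\]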
We survey some relationships between computational complexity and neural network theory. Here, only ...