In artificial neural networks, learning from data is a computationally demanding task in which a large number of connection weights are iteratively tuned through stochastic-gradient-based heuristic processes over a cost function. It is not well understood how learning occurs in these systems, in particular how they avoid getting trapped in configurations with poor computational performance. Here, we study the difficult case of networks with discrete weights, where the optimization landscape is very rough even for simple architectures, and provide theoretical and numerical evidence of the existence of rare, but extremely dense and accessible, regions of configurations in the network weight space. We define a measure, the robust ensemble (RE), ...
Stochasticity and limited precision of synaptic weights in neural network models are key aspects of ...
In this thesis, we consider resource limitations on machine learning algorithms in a variety of sett...
We show that discrete synaptic weights can be efficiently used for learning in large scale neural sy...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
We consider the algorithmic problem of finding the optimal weights and biases for a two-layer fully ...
Presented as part of the ARC 11 lecture on October 30, 2017 at 10:00 a.m. in the Klaus Advanced Comp...
Although machine learning has achieved great success in numerous complicated tasks, many machine lea...
It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate thi...
In this thesis, we theoretically analyze the ability of neural networks trained by gradient descent ...
We study the dynamics of gradient descent in learning neural networks for classification problems. U...