Deep neural networks of the sizes commonly encountered in practice have been proven to converge towards a global minimum of the training loss. The flatness of the loss surface in a neighborhood of such minima is often linked to better generalization performance. In this paper, we present a new model of growing neural network in which neurons are added incrementally throughout the learning phase. We study the characteristics of the minima found by such a network compared to those obtained with standard feedforward neural networks. The results of this analysis show that a neural network grown with our procedure converges towards a flatter minimum than a standard neural network with the same number of parameters trained from scratch. Fu...
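The growing procedure summarized above can be illustrated with a minimal sketch, assuming a PyTorch two-layer perceptron, a fixed one-neuron-per-100-steps growth schedule, and a toy regression task; the class name GrowingMLP, the layer sizes, the learning rate, and the schedule are illustrative assumptions and not the authors' implementation.

import torch
import torch.nn as nn

class GrowingMLP(nn.Module):
    """Two-layer MLP whose hidden layer can be grown one neuron at a time."""

    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

    def add_neuron(self):
        """Append one hidden unit while keeping the existing weights unchanged."""
        old_hidden = self.fc1.out_features
        new_fc1 = nn.Linear(self.fc1.in_features, old_hidden + 1)
        new_fc2 = nn.Linear(old_hidden + 1, self.fc2.out_features)
        with torch.no_grad():
            # Copy the old parameters; only the new row/column keeps its fresh init.
            new_fc1.weight[:-1] = self.fc1.weight
            new_fc1.bias[:-1] = self.fc1.bias
            new_fc2.weight[:, :-1] = self.fc2.weight
            new_fc2.bias.copy_(self.fc2.bias)
        self.fc1, self.fc2 = new_fc1, new_fc2

# Toy usage: grow one neuron every 100 steps on random regression data.
torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randn(256, 1)
model = GrowingMLP(10, 4, 1)
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(500):
    if step > 0 and step % 100 == 0:
        model.add_neuron()
        # The parameter tensors changed, so rebuild the optimizer to track them.
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

Rebuilding the optimizer after each growth step is one simple way to make it track the newly created parameters; keeping the old weights and giving only the new unit a fresh initialization reflects the idea of growing the network during, rather than before, training.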