We explore unique considerations involved in fitting machine learning (ML) models to data with very high precision, as is often required for science applications. We empirically compare various function approximation methods and study how they scale with increasing parameters and data. We find that neural networks (NNs) can often outperform classical approximation methods on high-dimensional examples, by (we hypothesize) auto-discovering and exploiting modular structures therein. However, neural networks trained with common optimizers are less powerful for low-dimensional cases, which motivates us to study the unique properties of neural network loss landscapes and the corresponding optimization challenges that arise in the high-precision regime.
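As a concrete illustration of the kind of comparison described above (a minimal sketch, not the paper's actual experimental setup), the snippet below fits a smooth 1D target with a classical approximator (a cubic spline) and with a small tanh MLP trained by Adam, using test RMSE as a proxy for precision. The target function, network width, step count, and learning rate are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's setup): compare a
# classical approximator (cubic spline) against a small neural network
# on a 1D regression task and report test RMSE as a proxy for precision.
import numpy as np
import torch
from scipy.interpolate import CubicSpline

# Target function and train/test grids (assumed for illustration).
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x**2
x_train = np.linspace(0.0, 1.0, 64)
x_test = np.linspace(0.0, 1.0, 1001)
y_train, y_test = f(x_train), f(x_test)

# Classical baseline: cubic spline interpolation through the samples.
spline = CubicSpline(x_train, y_train)
spline_rmse = np.sqrt(np.mean((spline(x_test) - y_test) ** 2))

# Neural baseline: small tanh MLP trained with Adam to low MSE.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
).double()  # float64 matters when chasing very low loss

xt = torch.tensor(x_train, dtype=torch.float64).unsqueeze(1)
yt = torch.tensor(y_train, dtype=torch.float64).unsqueeze(1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(20000):
    opt.zero_grad()
    loss = torch.mean((net(xt) - yt) ** 2)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = net(torch.tensor(x_test, dtype=torch.float64).unsqueeze(1))
nn_rmse = float(torch.sqrt(torch.mean(
    (pred - torch.tensor(y_test).unsqueeze(1)) ** 2)))

print(f"spline test RMSE: {spline_rmse:.2e}")
print(f"MLP    test RMSE: {nn_rmse:.2e}")
```

On a smooth low-dimensional target like this one, the spline typically reaches far lower test error than the Adam-trained network, consistent with the observation above that NNs trained with common optimizers are less powerful in low-dimensional cases; the abstract's claim is that this comparison can reverse on high-dimensional problems with modular structure.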