We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets shifted by a constant bias, gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainties. Using the neural tangent kernel (NTK), we demonstrate that this phenomenon occurs in part because the NTK is not shift-invariant. Since this is achieved via a trivial input transformation, we show that it can therefore be approximated using just a single neural network -- using a tec...
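The abstract above describes training an ensemble from one shared weight initialization, where each member sees the training inputs shifted by a different constant bias, and reading the disagreement between members as an epistemic-uncertainty signal. A minimal sketch of that idea follows, using a small NumPy MLP trained by full-batch gradient descent; the shift values, network size, and training schedule are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def train_mlp(X, y, W1, b1, W2, b2, lr=0.05, steps=2000):
    """Full-batch gradient descent on a one-hidden-layer tanh MLP (squared loss)."""
    W1, b1, W2, b2 = W1.copy(), b1.copy(), W2.copy(), b2.copy()
    n = len(X)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)        # hidden activations
        pred = h @ W2 + b2              # scalar output per sample
        g = 2.0 * (pred - y) / n        # d(mean squared loss)/d(pred)
        gW2 = h.T @ g
        gb2 = g.sum(axis=0)
        gh = (g @ W2.T) * (1 - h**2)    # backprop through tanh
        gW1 = X.T @ gh
        gb1 = gh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(64, 1))
y = np.sin(2 * X)

# One shared initialization for every ensemble member.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

shifts = [-1.0, -0.5, 0.0, 0.5, 1.0]   # constant input biases (assumed values)
members = [train_mlp(X + c, y, W1, b1, W2, b2) for c in shifts]

X_test = np.linspace(-2, 2, 50).reshape(-1, 1)
# Each member evaluates the same test point in its own shifted coordinates,
# so the shift cancels and only the training-trajectory differences remain.
preds = np.stack([predict(X_test + c, p) for p, c in zip(members, shifts)])
uncertainty = preds.std(axis=0).ravel()  # member disagreement as an epistemic proxy
```

Because the NTK is not shift-invariant, identically initialized members trained on shifted copies of the data follow different optimization trajectories, so `preds.std(axis=0)` is nonzero even though every member started from the same weights.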
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes...
Since their inception, machine learning methods have proven useful, and their usability continues to...
The Delta method is a classical procedure for quantifying epistemic uncertainty in statistical model...
This paper proposes a fast and scalable method for uncertainty quantification of machine learning mo...
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine l...
Although neural networks are powerful function approximators, the underlying modelling assumptions u...
Considering uncertainty estimation of modern neural networks (NNs) is one of the most important ste...
Uncertainty estimation (UE) techniques -- such as the Gaussian process (GP), Bayesian neural network...
Whereas the ability of deep networks to produce useful predictions on many kinds of data has been am...
Uncertainty estimation methods using deep learning approaches strive against separating how uncertai...
Neural network predictions are unreliable when the input sample is out of the training distribution...
Intelligence relies on an agent's knowledge of what it does not know. This capability can be assesse...
Uncertainty estimation for machine learning models is of high importance in many scenarios such as c...
Designing uncertainty-aware deep learning models which are able to provide reasonable uncertainties ...
The breakout success of deep neural networks (NNs) in the 2010's marked a new era in the quest to bu...