Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard to rely on. Ideally, we want networks to be accurate, calibrated and confident. We show that, as opposed to the standard cross-entropy loss, focal loss (Lin et al., 2017) allows us to learn models that are already very well calibrated. When combined with temperature scaling, whilst preserving accuracy, it yields state-of-the-art calibrated models. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to justify the empirically excellent performance of focal loss. To facilitate the use of focal loss in practice, we also provide a principled approac...
Recent advances in deep learning have pushed the performances of visual saliency models way further ...
There is no such thing as a perfect dataset. In some datasets, deep neural networks discover underly...
Communicating the predictive uncertainty of deep neural networks transparently and reliably is impor...
As deep learning classifiers become ever more widely deployed for medical image analysis tasks, issu...
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scor...
Neural networks have been widely studied and used in recent years due to their high classification accu...
The cross-entropy softmax loss is the primary loss function used to train deep neural networks. On t...
In spite of the dominant performances of deep neural networks, recent works have shown that they are...
Deep neural networks (DNNs) have made great strides in pushing the state-of-the-art in several chall...
Calibrating deep neural models plays an important role in building reliable, robust AI systems in sa...
It is now well known that neural networks can be wrong with high confidence in their predictions, le...
We address the problem of network calibration, adjusting miscalibrated confidences of deep neural net...
Abstract In this paper, we explore degrees of freedom in deep sigmoidal neural networks. We show tha...
Deep neural networks have been shown to be highly miscalibrated. Often they tend to be overconfident...
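Several of the abstracts above refer to temperature scaling as a post-hoc calibration step. As a point of reference, here is a minimal sketch of that technique, assuming held-out validation logits and labels are available; the function names and the grid-search fitting strategy are illustrative choices, not taken from any of the listed papers.

```python
# Minimal sketch of temperature scaling: divide logits by a single scalar T,
# chosen to minimize negative log-likelihood on a validation set.
# Names (softmax, nll, fit_temperature) and the grid search are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Mean negative log-likelihood of the true labels at temperature T.
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Temperature scaling has one scalar parameter, so a coarse grid
    # search over validation logits is enough for a sketch.
    return min(grid, key=lambda T: nll(logits, labels, T))
```

For an overconfident model (high-confidence logits but imperfect accuracy), the fitted temperature comes out above 1, softening the predicted probabilities without changing the argmax, so accuracy is preserved.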