The cross-entropy softmax loss is the primary loss function used to train deep neural networks. On the other hand, the focal loss function has been demonstrated to provide improved performance when there is an imbalance in the number of training samples per class, such as in long-tailed datasets. In this paper, we introduce a novel cyclical focal loss and demonstrate that it is a more universal loss function than either the cross-entropy softmax loss or the focal loss. We describe the intuition behind the cyclical focal loss, and our experiments provide evidence that it yields superior performance on balanced, imbalanced, and long-tailed datasets. We provide numerous experimental results for CIFAR-10/CIFAR-100, ImageNet, balanced and...
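The abstract does not spell out the loss itself, so the Python sketch below is only a hedged illustration of the idea under assumptions: a standard focal loss term, an assumed "high-confidence" counterpart that up-weights confident samples, and an assumed cyclical mixing weight xi driven by a cyclical factor fc. The function names, the xi schedule, and the gamma values are illustrative choices, not the paper's definitions.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Focal loss: down-weights easy, high-confidence samples via (1 - p_t)^gamma.
    log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

def high_confidence_loss(logits, targets, gamma_hc=2.0):
    # Assumed counterpart that instead up-weights confident samples via (1 + p_t)^gamma_hc.
    log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 + pt) ** gamma_hc * log_pt).mean()

def cyclical_focal_loss(logits, targets, epoch, num_epochs, fc=4.0):
    # Assumed cyclical schedule: xi starts at 1 (favor confident samples),
    # falls to 0 by epoch num_epochs/fc (favor hard samples), then rises back to 1.
    if fc * epoch <= num_epochs:
        xi = 1.0 - fc * epoch / num_epochs
    else:
        xi = (fc * epoch / num_epochs - 1.0) / (fc - 1.0)
    return xi * high_confidence_loss(logits, targets) + (1.0 - xi) * focal_loss(logits, targets)

Under this assumed schedule, training emphasizes confident samples at the start and end of training and hard samples in the middle, which is one way to realize the easy-to-hard-to-easy curriculum intuition the abstract alludes to.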
Learning in deep neural networks takes place by minimizing a nonconvex high-dimensional loss functio...
Automatic segmentation methods are an important advancement in medical image analysis. Machine learn...
Cross-entropy loss and focal loss are the most common choices when training deep neural networks for...
While cross entropy (CE) is the most commonly used loss to train deep neural networks for classifica...
Deep convolutional neural networks (CNNs) trained with logistic and softmax losses have made signifi...
The categorical cross-entropy loss is shown for the convolutional neural network training as a funct...
Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Netw...
The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net train...
Deep learning techniques have become the tool of choice for side-channel analysis. In recent years, ...
Deep learning has been shown to achieve impressive results in several domains like computer vision a...
Convolutional Neural Networks (CNNs) have shown great power in various classification tasks and have...
The top-$k$ error is a common measure of performance in machine learning and computer vision. In pra...
This paper describes the principle of "General Cyclical Training" in machine learning, where trainin...