One of the most common metrics to evaluate neural network classifiers is the area under the receiver operating characteristic curve (AUC). However, optimisation of the AUC as the loss function during network training is not a standard procedure. Here we compare minimising the cross-entropy (CE) loss and optimising the AUC directly. In particular, we analyse the loss function landscape (LFL) of approximate AUC (appAUC) loss functions to discover the organisation of this solution space. We discuss various surrogates for AUC approximation and show their differences. We find that the characteristics of the appAUC landscape are significantly different from the CE landscape. The approximate AUC loss function improves testing AUC, and the...
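The surrogate idea behind appAUC is easiest to see concretely. The exact AUC is the Wilcoxon-Mann-Whitney statistic, AUC = (1/(|P||N|)) Σ_{i∈P} Σ_{j∈N} 1[s_i > s_j], where s_i and s_j are the network scores of positive and negative examples; the indicator 1[·] has zero gradient almost everywhere, so it must be replaced by a smooth function before gradient-based training is possible. Below is a minimal sketch of one common choice, a pairwise sigmoid surrogate, written in PyTorch. The function name, the sharpness parameter beta, and the sigmoid itself are illustrative assumptions, not necessarily one of the specific surrogates compared in the paper.

```python
import torch

def pairwise_sigmoid_auc_loss(scores: torch.Tensor,
                              labels: torch.Tensor,
                              beta: float = 1.0) -> torch.Tensor:
    """Differentiable surrogate for 1 - AUC (one possible appAUC).

    Replaces the indicator 1[s_i > s_j] in the Wilcoxon-Mann-Whitney
    statistic with sigmoid(beta * (s_i - s_j)), which is smooth and
    admits useful gradients. Assumes binary labels in {0, 1} and that
    the batch contains at least one positive and one negative example.
    """
    pos = scores[labels == 1]   # scores of positive examples
    neg = scores[labels == 0]   # scores of negative examples
    # All positive-negative score differences, shape (|P|, |N|).
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)
    app_auc = torch.sigmoid(beta * diff).mean()
    return 1.0 - app_auc        # minimise 1 - appAUC
```

A quick usage sketch: the loss is a scalar in (0, 1), and gradients flow back through the scores to the network parameters, so it can stand in wherever a CE loss would be used.

```python
scores = torch.tensor([2.0, 0.5, -1.0, 0.2], requires_grad=True)
labels = torch.tensor([1, 1, 0, 0])
loss = pairwise_sigmoid_auc_loss(scores, labels)
loss.backward()
```

Note that larger beta makes the sigmoid approach the exact indicator (a tighter AUC approximation) at the cost of vanishing gradients away from the decision boundary, which is one reason different surrogates can produce the qualitatively different loss landscapes the abstract describes.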