In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using distributionally robust optimization (DRO) to define the loss for each individual positive example. We consider two formulations of DRO: one is based on conditional value-at-risk (CVaR) and yields a non-smooth but exact estimator of pAUC; the other is based on a KL-divergence-regularized DRO and yields an inexact but smooth (soft) estimator of pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms and prove their convergence for optimizing the two formulations...
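To make the CVaR-based formulation concrete, the following is a minimal NumPy sketch (not the authors' implementation) of a one-way pAUC surrogate: for each positive example, the CVaR of its pairwise losses over the negatives reduces to averaging the top-k largest losses, with k determined by the false-positive-rate fraction. The function name, the `beta` and `margin` parameters, and the squared-hinge pairwise loss are illustrative assumptions.

```python
import numpy as np

def cvar_pauc_loss(pos_scores, neg_scores, beta=0.1, margin=1.0):
    """Sketch of a CVaR-style surrogate for one-way partial AUC.

    For each positive score, compute a pairwise squared-hinge loss
    against every negative score, then average only the top-k losses
    (k = ceil(beta * n_neg)), i.e. the CVaR of the pairwise losses.
    All names and defaults here are illustrative, not the paper's.
    """
    n_neg = len(neg_scores)
    k = max(1, int(np.ceil(beta * n_neg)))
    total = 0.0
    for s_pos in pos_scores:
        # pairwise squared-hinge loss against each negative
        losses = np.maximum(0.0, margin - (s_pos - neg_scores)) ** 2
        # CVaR over negatives = mean of the k hardest (largest) losses
        total += np.sort(losses)[-k:].mean()
    return total / len(pos_scores)
```

With `beta=1.0` this recovers an ordinary (full) AUC squared-hinge surrogate; shrinking `beta` focuses the loss on the hardest negatives, which is what restricts attention to the low-false-positive-rate region of the ROC curve.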
How to train deep neural networks (DNNs) to generalize well is a central concern in deep learning, e...
Nonconvex min-max optimization has received increasing attention in modern machine learning, especially ...
Despite superior performance in many situations, deep neural networks are often vulnerable to advers...
The area under the ROC curve (AUC) is one of the most widely used performance measures for classific...
The area under the ROC curve (AUROC) has been vigorously applied for imbalanced classification and m...
Learning to improve AUC performance is an important topic in machine learning. However, AUC maximiza...
Distributionally robust optimization (DRO) has shown a lot of promise in providing robustness in learn...
In the past decade, neural networks have demonstrated impressive performance in supervised learning....
Federated optimization (FedOpt), which aims at collaboratively training a learning model across a...
The success of deep learning has shown impressive empirical breakthroughs, but many theoretical ques...
Area Under the ROC Curve (AUC) is a widely used ranking metric in imbalanced learning due to its ins...
Area under the ROC curve, a.k.a. AUC, is a measure of choice for assessing the performance of a clas...
The remarkable practical success of deep learning has revealed some major surprises from a theoretic...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
Robustness of machine learning, often referring to securing performance on different data, is always...