Preserving the performance of a trained model while removing the unique characteristics of marked training data points is challenging. Recent research typically suggests either retraining the model from scratch on the remaining training data or refining the model by reverting its optimization on the marked data points. Unfortunately, besides being computationally inefficient, these approaches inevitably hurt the resulting model's generalization ability, since they discard not only the unique characteristics of the marked points but also shared (and possibly contributive) information. To address this performance degradation problem, this paper presents a novel approach called Performance Unchanged Model Augmentation (PUMA). The proposed PUMA framework explicitly models ...
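As a minimal sketch of the "retrain from scratch on the remaining data" baseline that the abstract above contrasts PUMA against: the marked points are dropped and the model is simply refit, which removes their influence entirely but also loses whatever useful signal they carried. The model, data, and function names here (`fit`, `unlearn_by_retraining`) are illustrative assumptions, not part of PUMA.

```python
import numpy as np

def fit(X, y):
    """Ordinary least-squares fit (closed form, with a tiny ridge term for stability)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + 1e-8 * np.eye(d), X.T @ y)

def unlearn_by_retraining(X, y, marked):
    """Naive unlearning baseline: drop the marked training points and refit from scratch."""
    keep = np.setdiff1d(np.arange(len(X)), marked)
    return fit(X[keep], y[keep])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

w_full = fit(X, y)                                          # model on all data
w_retrained = unlearn_by_retraining(X, y, marked=np.arange(10))  # marked points removed
# w_retrained no longer depends on the marked points at all, but it was fit on
# less data -- the generalization cost described in the abstract above.
```

The retrained weights are close to the full-data weights here only because the toy data is abundant and i.i.d.; with scarce or heterogeneous data, removing points degrades the model, which is exactly the gap PUMA targets.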
In this work, we propose ModelPred, a framework that helps to understand the impact of changes in tr...
While machine learning systems are known to be vulnerable to data-manipulation attacks at both train...
Machine learning models are vulnerable to evasion attacks, where the attacker starts from a correctl...
To expand the size of a real dataset, data augmentation techniques artificially create various versi...
The great success of deep learning heavily relies on increasingly larger training data, which comes ...
Good data stewardship requires removal of data at the request of the data's owner. This raises the q...
We introduce an exploratory study on Mutation Validation (MV), a model validation method using mutat...
One of the fundamental assumptions of machine learning is that learnt models are applied to data th...
Deep learning models are achieving remarkable performance on numerous tasks across various fields an...
A large body of research has shown that machine learning models are vulnerable to membership inferen...
The parameters of any machine learning (ML) model are obtain...
Scenarios in which restrictions in data transfer and storage limit the possibility to compose a sing...
In addition to high accuracy, robustness is becoming increasingly important for machine learning mod...
Machine learning (ML) has established itself as a cornerstone for various critical applications rang...
Machine Learning today plays a vital role in a wide range of critical applications. To ensure ML mod...