This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization optimization function that has been shown to improve upon the robust optimization framework developed by Madry et al. (201...
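For orientation, the following is a minimal, illustrative sketch of a TRADES-style objective in PyTorch. It is not the thesis's implementation; the function name, hyperparameter values, and attack schedule here are assumptions chosen for exposition. The structure follows the published description of TRADES: a clean cross-entropy term plus a KL-divergence term between the model's outputs on clean and adversarially perturbed inputs.

    import torch
    import torch.nn.functional as F

    def trades_loss(model, x, y, beta=6.0, epsilon=0.031, step_size=0.007, num_steps=10):
        """Illustrative TRADES-style loss: clean cross-entropy plus beta times the
        KL divergence between clean and adversarial output distributions."""
        # Clean prediction distribution, held fixed while crafting the perturbation.
        p_clean = F.softmax(model(x), dim=1).detach()

        # Inner maximization: projected gradient ascent on the KL term
        # within an L-infinity ball of radius epsilon around x.
        x_adv = x.detach() + 0.001 * torch.randn_like(x)
        for _ in range(num_steps):
            x_adv.requires_grad_(True)
            kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                          reduction="batchmean")
            grad = torch.autograd.grad(kl, x_adv)[0]
            x_adv = x_adv.detach() + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)

        # Outer minimization: natural loss plus the robustness regularizer.
        logits_clean = model(x)
        natural_loss = F.cross_entropy(logits_clean, y)
        robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                             F.softmax(logits_clean, dim=1),
                             reduction="batchmean")
        return natural_loss + beta * robust_kl

By comparison, the robust optimization framework of Madry et al. minimizes the cross-entropy of adversarially perturbed inputs directly; TRADES keeps the clean cross-entropy term and regularizes with the clean-versus-adversarial KL divergence, with beta controlling the trade-off between natural accuracy and robustness.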
Deep Neural Networks (DNN) have been shown to be vulnerable to adversarial examples. Adversarial tra...
Recent discoveries uncovered flaws in machine learning algorithms such as deep neural networks. Deep...
Deep neural networks are known to be vulnerable to adversarial attacks. The empirical analysis in ou...
Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial ...
Extended version of paper published in ACM AISec 2019; first two authors contributed equally. Internat...
Deep learning has seen tremendous growth, largely fueled by more powerful computers, the availabilit...
Adversarial training, which aims to enhance robustness against adversarial attacks, has received much ...
Adversarial training has been shown to regularize deep neural networks in addition to increasing the...
Deep neural networks have achieved state-of-the-art performance in many artificial intelligence area...
Adversarial robustness has become a central goal in deep learning, both in the theory and the practi...
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. H...
Deep learning plays an important role in various disciplines, such as autonomous driving, information tech...
Recent studies on the adversarial vulnerability of neural networks have shown that models trained wi...
Recent years have witnessed the remarkable success of deep neural network (DNN) models spanning a wi...
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign metho...