This thesis studies the effect of adding a term that is usually neglected during the training phase of energy-based models. It is called the ``KL term'' because of its dependence on the Kullback-Leibler divergence, and including it has no significant impact on training in terms of running time or computational cost. I will initially present an analysis of its impact on training stability, along with some considerations regarding the general structure of the learning model. I will then study the denoising capabilities of the model by implementing top-down processes applied to different types of noisy input. Thirdly, to understand the quality of the internal representations emerging in the hidden layers, I will apply a read-out classifier to the deepest hidden...
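As a minimal sketch of where such a term comes from (the notation below is assumed for illustration and is not taken from the thesis itself): writing $p_\theta(x) \propto e^{-E_\theta(x)}$ for the energy-based model and $q_\theta$ for the distribution produced by a finite MCMC sampling chain, the contrastive-divergence objective can be written as
\[
\mathcal{L}_{\mathrm{CD}}(\theta) \;=\; \mathrm{KL}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right) \;-\; \mathrm{KL}\!\left(q_\theta \,\|\, p_\theta\right),
\]
whose full gradient is
\[
\nabla_\theta \mathcal{L}_{\mathrm{CD}}
\;=\;
\mathbb{E}_{p_{\mathrm{data}}}\!\left[\nabla_\theta E_\theta(x)\right]
\;-\;
\mathbb{E}_{q_\theta}\!\left[\nabla_\theta E_\theta(x)\right]
\;-\;
\frac{\partial\, \mathrm{KL}\!\left(q_\theta \,\|\, p_\theta\right)}{\partial q_\theta}\,
\frac{\partial q_\theta}{\partial \theta}.
\]
The last term, which arises only because the sampling distribution $q_\theta$ itself depends on $\theta$, is the ``KL term'' that standard training omits and whose effect this thesis investigates.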
Recent discoveries uncovered flaws in machine learning algorithms such as deep neural networks. Deep...
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorl...
In this paper, we show that adversarial training-time attacks by a few pixel modifications can cause...
Deep learning plays an important role in various disciplines, such as auto-driving, information tech...
From simple time series forecasting to computer security and autonomous systems, machine learning (M...
In standard Deep Neural Network (DNN) based classifiers, the general convention is to omit the activ...
The understanding of generalization in machine learning is in a state of flux. This is partly due to...
This paper serves as an investigation in the use of energy-based models for adversarial defense via ...
In recent years, machine learning algorithms have been applied widely in various fields such as heal...
A convolutional neural network (CNN) is a type of neural network commonly used to analyze visual image...
Adversarial training (AT) and its variants have spearheaded progress in improving neural network rob...
Deep learning is a machine learning technique that enables computers to learn directly from images, ...
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign metho...
Deep Learning (read neural networks) has emerged as one of the most exciting and powerful tools in t...
Thesis (Master's), University of Washington, 2021. Carefully crafted input has been shown to cause mis...