Recently, the vulnerability of deep neural network (DNN)-based audio systems to adversarial attacks has received increasing attention. However, existing audio adversarial attacks assume that the adversary possesses the user's entire audio input and has a sufficient time budget to generate the adversarial perturbations. These idealized assumptions make existing audio adversarial attacks largely impossible to launch in a timely fashion in practice (e.g., playing unnoticeable adversarial perturbations along with the user's streaming input). To overcome these limitations, in this paper we propose the fast audio adversarial perturbation generator (FAPG), which uses a generative model to generate adversarial perturbations for the...
Speaker recognition is a task that identifies the speaker from multiple audios. Recently, advances i...
The effective deployment of smart service systems within homes, workspaces and cities, requires gain...
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbe...
Deep neural networks (DNNs) have been widely used in many important applications, such as computer v...
Deep neural networks (DNNs) continue to demonstrate superior generalization performance in an increa...
Deep neural networks (DNNs) serve as a backbone of many image, language and speech processing system...
Recently, adversarial attacks for audio recognition have attracted much attention. However, most of ...
Automatic speech recognition (ASR) is an essential technology used in commercial products nowadays. ...
Deep neural networks (DNNs) are vulnerable to adversarial attack despite their tremendous success in...
Deep neural networks (DNNs) are vulnerable to adversarial attack despite their tremendous success in...
An adversarial attack is a method to generate perturbations to the input of a machine learning model...
With the widespread use of machine learning techniques in many areas of our life (e.g., recognizing ...
Voice-user interface (VUI) has exploded in popularity due to the recent advances in automatic speech...
In this work, we demonstrate the existence of universal adversarial audio perturbations that cause m...
Automatic Speech Recognition (ASR) systems have been growing in prevalence together with the advance...