Named Entity Recognition is a fundamental task in information extraction and an essential element of various Natural Language Processing pipelines. Adversarial attacks have been shown to greatly affect the performance of text classification systems, but knowledge about their effectiveness against named entity recognition models is limited. This paper investigates the effectiveness and portability of adversarial attacks from text classification to named entity recognition and the ability of adversarial training to counteract these attacks. We find that character-level and word-level attacks are the most effective, but adversarial training can grant significant protection at little to no expense of standard performance. Alongside our resul...
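To make the attack types mentioned above concrete, the following minimal sketch (not code from the paper; the function names and the random adjacent-character-swap strategy are illustrative assumptions) perturbs a tokenized NER input at the character level. A real attack would instead search for the perturbations that most degrade the victim model's entity predictions rather than swapping characters at random.

```python
import random


def char_swap(token: str, rng: random.Random) -> str:
    """Character-level perturbation: swap two adjacent inner characters."""
    if len(token) < 4:
        return token  # too short to perturb without destroying the token
    i = rng.randrange(1, len(token) - 2)
    chars = list(token)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def perturb_tokens(tokens, rate=0.3, seed=0):
    """Apply the character-level swap to a random subset of tokens."""
    rng = random.Random(seed)
    return [char_swap(t, rng) if rng.random() < rate else t for t in tokens]


# A tokenized sentence a NER model would tag with PER / LOC labels.
sentence = ["Angela", "Merkel", "visited", "Paris", "on", "Monday", "."]
print(perturb_tokens(sentence))
```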
Since the Message Understanding Conferences on Information Extraction in the 8...
Speaker recognition has become very popular in many application scenarios, such as smart homes and s...
NLP researchers propose different word-substitute black-box attacks that can fool text classificatio...
We study an important and challenging task of attacking natural language processing models in a hard...
In recent years, deep neural networks have been shown to lack robustness and to be vulnerabl...
Recent studies have shown that natural language processing (NLP) models are vulnerable to adversaria...
Text classification is a basic task in natural language processing, but the small character perturba...
We study an important task of attacking natural language processing models in a black box setting. W...
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alt...
This thesis focuses on named entity recognition applied to email phishing detection. Named entity re...
Named Entity Recognition (NER) aims to extract and to classify rigid designators in text such as pro...
Named entity recognition models (NER) are widely used for identifying named entities (e.g., individ...
Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial att...
In order to obtain high quality and large-scale labelled data for information security research, we ...