This paper addresses zero-shot slot filling, which aims to build a system that can generalize to unseen slot types without any training data. The key to zero-shot slot filling is to match tokens from the utterance against the semantic definition of each slot without training data in the target domain. This paper tackles the problem by devising a scheme that fully leverages pre-trained language models (PLMs). To this end, we propose a new prompting scheme that uses both learnable tokens and slot names to guide the model toward the text spans relevant to a given slot. Furthermore, we use attention values between tokens to form a feature descriptor for each token, motivated by the fact that the attention value in a PLM natura...
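The attention-as-feature idea in this abstract lends itself to a short illustration. Below is a minimal sketch, assuming a BERT-style encoder accessed through the HuggingFace transformers library; the specific descriptor used here (attention received by each token, averaged over query positions and flattened across layer-head pairs) is an illustrative assumption, not the formulation from the paper.

```python
# Hypothetical sketch: turning PLM attention values into per-token
# feature descriptors (not the paper's actual implementation).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

utterance = "book a flight from boston to denver"
inputs = tokenizer(utterance, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)

# One illustrative descriptor: for each token, average the attention it
# receives over all query positions, then flatten across every
# (layer, head) pair into a single feature vector per token.
layers, heads, seq_len, _ = attn.shape
received = attn.mean(dim=2)                        # (layers, heads, seq)
descriptors = received.permute(2, 0, 1).reshape(seq_len, layers * heads)
print(descriptors.shape)                           # e.g. torch.Size([9, 144])
```

Many other aggregations are possible (e.g., keeping the full attention row per token, or restricting to particular layers); this one is chosen only to keep the descriptor a fixed-size vector per token.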
Recent joint intent detection and slot tagging models have seen improved performance when compared t...
Large language models have recently been shown to attain reasonable zero-shot ...
We propose a novel framework for zero-shot learning of topic-dependent language models, which enable...
Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the...
NLP has yielded results that were unimaginable only a few years ago on a wide range of real-world ta...
We introduce a novel approach that jointly learns slot filling and delexicalized sentence generation...
Slot filling is a crucial task in the Natural Language Understanding (NLU) component of a dialogue s...
Slot filling is a core operation for utterance understanding in task-oriented dialogue systems. Slot...
Slot filling is a challenging task in Spoken Language Understanding (SLU). Supervised methods usuall...
Zero-shot learning has gained popularity due to its potential to scale recognition models without re...
Slot filling techniques are often adopted in language understanding components for task-oriented dia...
Conversational agents such as Alexa and Google Assistant constantly need to increase their language ...
We introduce Span-ConveRT, a light-weight model for dialog slot-filling which frames the task as a t...
When building spoken dialogue systems for a new domain, a major bottleneck is developing a spoken la...
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural langua...