Shared Task Evaluation Challenges (STECs) have only recently begun in the field of NLG. The TUNA STECs, which focused on Referring Expression Generation (REG), have been part of this development since its inception. This chapter looks back on the experience of organising the three TUNA Challenges, which came to an end in 2009. While we discuss the role of the STECs in yielding a substantial body of research on the REG problem, which has opened new avenues for future research, our main focus is on the role of different evaluation methods in assessing the output quality of REG algorithms, and on the relationship between such methods.
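To make the notion of an intrinsic evaluation method concrete, the sketch below computes the Dice coefficient between a system-selected attribute set and a human-produced one, one of the corpus-based similarity measures used in the TUNA attribute-selection evaluations. This is an illustrative sketch only: the function name and the example attribute values are our own and are not taken from the TUNA materials.

```python
def dice(reference: set, system: set) -> float:
    """Dice coefficient between two attribute sets: 2|A∩B| / (|A|+|B|).

    An illustrative intrinsic measure for attribute selection; a score of
    1.0 means the sets are identical, 0.0 means they share no attributes.
    """
    if not reference and not system:
        return 1.0
    return 2 * len(reference & system) / (len(reference) + len(system))


# Hypothetical example: a human-produced description vs. a system output.
human_attrs = {"type:chair", "colour:red", "size:large"}
system_attrs = {"type:chair", "colour:red"}
print(dice(human_attrs, system_attrs))  # 0.8
```

A measure like this only captures overlap with one human description; the chapter's point is precisely that such intrinsic scores need to be related to other methods, such as task-based or human-judgement evaluations.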