Different summarization requirements can make writing a good summary more difficult, or easier. Summary length and the characteristics of the input are two such constraints that influence the quality of a potential summary. In this paper we report the results of a quantitative analysis of data from large-scale evaluations of multi-document summarization, empirically confirming this hypothesis. We further show that features measuring the cohesiveness of the input are highly correlated with eventual summary quality, and that these can be used to predict the difficulty of new, unseen summarization inputs.
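To make the approach concrete, the sketch below illustrates one plausible realization of the idea, not the paper's exact feature set: input cohesiveness is approximated as the mean pairwise cosine similarity of TF-IDF document vectors, and a simple classifier is fit to predict whether a multi-document input set will be easy or difficult to summarize. The toy training data, the binary labels, and the single-feature design are all illustrative assumptions.

```python
# Hedged sketch: predict summarization difficulty from input cohesiveness.
# The feature (mean pairwise TF-IDF cosine similarity) and the labels are
# illustrative assumptions, not the paper's exact experimental setup.
from itertools import combinations

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity


def cohesiveness(documents):
    """Mean pairwise cosine similarity over the documents in one input set."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
    sims = cosine_similarity(tfidf)
    pairs = list(combinations(range(len(documents)), 2))
    return float(np.mean([sims[i, j] for i, j in pairs]))


# Hypothetical training data: one feature per multi-document input set,
# with a binary difficulty label derived from human summary scores.
inputs = [
    ["report on storm damage", "report on storm recovery", "storm aftermath"],
    ["piece on elections", "article on a new vaccine", "notes on football"],
]
labels = [0, 1]  # 0 = easy (cohesive input), 1 = difficult (diffuse input)

X = np.array([[cohesiveness(docs)] for docs in inputs])
clf = LogisticRegression().fit(X, labels)

# Predict difficulty for a new, unseen input set.
new_set = ["a story on wildfires", "coverage of the same wildfires"]
print(clf.predict([[cohesiveness(new_set)]]))
```

Cohesive inputs (documents covering one tight topic) yield high similarity scores and are predicted easy; diffuse inputs yield low scores and are predicted difficult, matching the correlation reported above.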