The task of data-to-text generation amounts to describing structured data, such as RDF triples, in fluent natural language sentences. The state-of-the-art approach in research systems today is finetuning pretrained language models (PLMs). This often leads to overfitting to the data and may produce hallucinations, i.e., situations where the PLM generates outputs that are not grounded in the input, typically replicating (or amplifying) training data noise. Rather than applying a PLM as a black box to the whole data-to-text task, we aim to use PLMs for simple individual subtasks, in order to achieve broad generalization and minimize hallucination. First, we use a pipeline approach where the PLMs only work as text "editors", rather than generators,...
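To make the setup concrete, the sketch below illustrates the first stage of such a pipeline under stated assumptions: each RDF triple is verbalized into a draft sentence by a hand-written template, and only the subsequent editing of these drafts is delegated to a PLM. The Triple class, the template inventory, and the predicate names are hypothetical illustrations, not artifacts of the work described above.

    from dataclasses import dataclass

    @dataclass
    class Triple:
        """A single RDF triple: (subject, predicate, object)."""
        subj: str
        pred: str
        obj: str

    # Hypothetical single-triple templates, one per predicate
    # (illustrative only; not the actual template inventory).
    TEMPLATES = {
        "birthPlace": "{subj} was born in {obj}.",
        "occupation": "{subj} works as a {obj}.",
    }

    def verbalize(triple: Triple) -> str:
        """Turn one triple into a draft sentence via its template."""
        return TEMPLATES[triple.pred].format(subj=triple.subj, obj=triple.obj)

    data = [
        Triple("Ada Lovelace", "birthPlace", "London"),
        Triple("Ada Lovelace", "occupation", "mathematician"),
    ]
    drafts = [verbalize(t) for t in data]
    print(" ".join(drafts))
    # -> Ada Lovelace was born in London. Ada Lovelace works as a mathematician.
    # A PLM acting as a text "editor" would then fuse and smooth these
    # drafts into running text, rather than generating from raw triples.

The intended benefit of this split is that every draft sentence is grounded in exactly one input triple by construction, so the PLM's role is restricted to surface edits rather than producing new facts.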