In any system that uses structured knowledge graph (KG) data as its underlying knowledge representation, KG-to-text generation is a useful tool for turning parts of the graph data into text that can be understood by humans. Recent work has shown that models that make use of pretraining on large amounts of text data can perform well on the KG-to-text task, even with relatively little training data on the specific graph-to-text task. In this paper, we build on this concept by using large language models to perform zero-shot generation based on nothing but the model’s understanding of the triple structure from what it can read. We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge, but fal...
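The zero-shot setup described above can be sketched as follows: serialize the raw (subject, predicate, object) triples into a prompt and ask the model to verbalize them, with no task-specific fine-tuning. This is a minimal illustrative sketch, not the paper's exact prompt; the triple format, prompt wording, and example entities are assumptions.

```python
def triples_to_prompt(triples):
    """Serialize (subject, predicate, object) triples into a plain-text
    prompt asking an LLM to verbalize them zero-shot (hypothetical format)."""
    lines = [f"({s} | {p} | {o})" for s, p, o in triples]
    return (
        "Convert the following knowledge-graph triples into fluent English:\n"
        + "\n".join(lines)
        + "\nText:"
    )

# WebNLG-style example triples (illustrative, not taken from the paper)
triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
]
prompt = triples_to_prompt(triples)
print(prompt)
```

The resulting prompt would then be sent to the LLM as-is; the model's reply is taken directly as the generated description of the subgraph.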
Large-scale knowledge graphs (KGs) have been shown to be increasingly important in current information systems...
Knowledge graphs (KGs) contain rich information about world knowledge, entities, and relations. Thus...
We propose the use of controlled natural language as a target for knowledge gr...
Constructing knowledge graphs (KGs) is essential for various natural language understanding tasks, s...
The use of knowledge graphs (KGs) enhances the accuracy and comprehensiveness of the responses provi...
Recent improvements in KG-to-text generation are due to additional auxiliary pre-training tasks desi...
Deep Learning advances have enabled more fluent and flexible text generation. However, while these n...
Knowledge graphs (KGs) have been widely considered in natural language generation (NLG) tasks. A KG can ...
Insufficient training data is a key challenge for text classification. In particular, long-tail clas...
We introduce the problems of data-to-text generation and the current state of the art, i.e. pretrain...
Modern language models are strong at generating grammatically correct, natural language. However, ...
Knowledge graphs play a vital role in numerous artificial intelligence tasks, yet they frequently fa...
Pre-trained language representation models, such as BERT, capture a general language representation ...
As the field of Large Language Models (LLMs) evolves at an accelerated pace, the critical need to as...