Pretrained language models (PLMs) have made remarkable progress on table-to-text generation tasks. However, their lack of domain-specific knowledge makes it difficult to bridge the topological gap between tabular data and text, especially in real-world applications with limited resources. To mitigate the problem of insufficient labeled data, we propose a novel framework: Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt prompt templates of domain-specific knowledge into the model, which brings at least three benefits: (1) it injects representations of normal table-related descriptions to bridge the topological gap between tabular data and text; (2) it enables us to use large amounts of unlabeled domain-specific ...
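To make the prompt-template idea concrete, the sketch below shows the general pattern of filling a domain-specific template with a linearized table and letting a pretrained seq2seq PLM generate the description. This is a minimal illustration, not the authors' implementation: the backbone model (t5-base), the template wording, the `linearize_table`/`build_prompt` helpers, and the example clinical record are all assumptions for illustration only.

```python
# Minimal sketch (assumed, not AdaPTGen's actual code): fill a domain-specific
# prompt template with a linearized table and generate text with a seq2seq PLM.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-base"  # placeholder backbone; the paper's PLM may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def linearize_table(table: dict) -> str:
    """Flatten a {header: value} table into 'header is value' segments."""
    return "; ".join(f"{k} is {v}" for k, v in table.items())

def build_prompt(table: dict, domain: str) -> str:
    """Hypothetical domain-specific template wrapping the linearized table."""
    return (
        f"Describe the following {domain} record in fluent text: "
        f"{linearize_table(table)}"
    )

# Illustrative low-resource, domain-specific record (invented example).
table = {"patient age": "63", "blood pressure": "150/95 mmHg", "diagnosis": "hypertension"}
prompt = build_prompt(table, domain="clinical")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In this reading, the template carries the "normal table-related description" pattern, so the same scaffolding can be applied to unlabeled domain tables as well as the small labeled set.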