Few-shot Learning (FSL) aims to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries have been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by existing methods suffer from the challenges of knowledge missing, knowledge noise, and knowledge heterogeneity, which hinder performance in few-shot learning. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop an ontology transformation based on an external knowledge graph to address the knowledge missing issue, completing the ontology and converting structured knowledge into text. ...
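The structured-to-text step described above (verbalizing knowledge-graph content so a PLM can consume it inside a prompt) can be illustrated with a minimal sketch. The triple format, relation names, and [MASK]-style cloze template below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of verbalizing KG triples into auxiliary prompt text.
# All names (verbalize_triples, build_prompt, founder_of) are hypothetical,
# invented for this illustration; OntoPrompt's real transformation may differ.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def verbalize_triples(triples: List[Triple]) -> str:
    """Convert KG triples into a plain-text knowledge snippet."""
    # ("Steve Jobs", "founder_of", "Apple") -> "Steve Jobs founder of Apple."
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

def build_prompt(sentence: str, triples: List[Triple]) -> str:
    """Prepend verbalized knowledge to a cloze-style prompt for a PLM."""
    knowledge = verbalize_triples(triples)
    return f"{knowledge} {sentence} The relation is [MASK]."

if __name__ == "__main__":
    triples = [("Steve Jobs", "founder_of", "Apple")]
    print(build_prompt("Steve Jobs presented the iPhone at Apple.", triples))
```

In a full pipeline, the verbalized snippet would additionally be filtered or weighted to cope with the knowledge noise issue mentioned above; this sketch shows only the structured-to-text conversion.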
Knowledge graphs (KGs) serve as useful resources for various natural language processing application...
Knowledge graphs (KGs) are known for their large scale and knowledge inference ability, but are also...
Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural languag...
Few-shot classification requires deep neural networks to learn generalized representations only from...
Few-shot learning (FSL) aims to generate a classifier using limited labeled examples. Many existing ...
Recently, there has been an increasing interest in models that generate natural language explanation...
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unsee...
We present LiST, short for Lite Prompted Self-Training, a new method for parameter-efficient fine-t...
Few-shot relation extraction involves identifying the type of relationship between two specific enti...
Few-shot classification aims to adapt to new tasks with limited labeled examples. To fully use the a...
Compared with the traditional few-shot task, the few-shot none-of-the-above (NOTA) relation classifi...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-sh...
Natural language generation from structured data mainly focuses on surface-level descriptions, suffe...
Business analytics and machine learning have become essential success factors for various industries...
The generalization power of the pre-trained model is key to few-shot deep learning. Dropout is ...