Data augmentation has been an important ingredient for boosting the performance of learned models. Prior data augmentation methods for few-shot text classification have yielded substantial performance gains. However, they were not designed to capture the intricate compositional structure of natural language, and as a result they fail to generate samples with plausible and diverse sentence structures. Motivated by this, we present data Augmentation using Lexicalized Probabilistic context-free grammars (ALP), which generates augmented samples with diverse syntactic structures and plausible grammar. The lexicalized PCFG parse trees consider both the constituents and the dependencies to produce a syntactic frame that maximizes the variety of word choices...
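As a rough intuition for how a PCFG yields varied but grammatical surface forms, the following is a minimal sketch of sampling sentences from a toy probabilistic grammar. The grammar, probabilities, and function names here are illustrative assumptions, not the lexicalized grammar induced in the paper:

```python
import random

# Toy PCFG: each nonterminal maps to a list of (expansion, probability).
# Rules and weights are made up for illustration only.
PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("the", "N"), 0.7), (("a", "N"), 0.3)],
    "VP": [(("V", "NP"), 0.6), (("V",), 0.4)],
    "N":  [(("movie",), 0.5), (("review",), 0.5)],
    "V":  [(("praised",), 0.5), (("bored",), 0.5)],
}

def sample(symbol, rng):
    """Recursively expand `symbol` by sampling rules until only terminals remain."""
    if symbol not in PCFG:  # terminal word
        return [symbol]
    rules, probs = zip(*PCFG[symbol])
    rhs = rng.choices(rules, weights=probs, k=1)[0]
    return [word for part in rhs for word in sample(part, rng)]

rng = random.Random(0)
print(" ".join(sample("S", rng)))
```

Each draw follows the rule probabilities, so repeated sampling produces structurally diverse sentences that are all licensed by the grammar — the property ALP exploits, with the added lexicalization step constraining word choices within each syntactic frame.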
We evaluated probabilistic lexicalized tree-insertion grammars (PLTIGs) on a classification task rel...
Providing pretrained language models with simple task descriptions in natural language enables them ...
In recent years, the community of natural language processing (NLP) has seen amazing progress in the...
Ambiguity resolution in the parsing of natural language requires a vast repository of knowledge to g...
In many cases of machine learning, research suggests that the development of training data might hav...
In this paper, we explore how to utilize pre-trained language model to perform few-shot text classif...
We present an empirical study of the applicability of Probabilistic Lexicalized Tree Insertion Gramm...
The task of unsupervised induction of probabilistic context-free grammars (PCFGs) has attracted a lo...
Inducing a grammar from text has proven to be a notoriously challenging learning task despite decade...
Data augmentation techniques are widely used for enhancing the performance of machine learning model...
We propose a semi-supervised bootstrap learning framework for few-shot text classification. From a s...
Recent advances on large pre-trained language models (PLMs) lead impressive gains on natural languag...
This article uses semi-supervised Expectation Maximization (EM) to learn lexico-syntactic dependenci...