The question of how to probe contextual word representations in a way that is principled and useful has seen significant recent attention. In our contribution to this discussion, we argue, first, for a probe metric that reflects the trade-off between probe complexity and performance: the Pareto hypervolume. To measure complexity, we present a number of parametric and non-parametric metrics. Our experiments with such metrics show that probes' performance curves often fail to align with widely accepted rankings between language representations (with, e.g., non-contextual representations sometimes outperforming contextual ones). These results lead us to argue, second, that common simplistic probe tasks such as POS labeling and dependency arc labeling,...
Information theoretic measures of incremental parser load were generated from a phrase structure par...
Quantifying the complexity of a natural language is a difficult task on its own and comparing two or...
In this dissertation, we have proposed novel methods for robust parsing that integrate the flexibili...
Previous work on probing word representations for linguistic knowledge has focused on interpolation ...
This paper provides a computable quantitative measure which accounts for the d...
Pre-trained contextual representations have led to dramatic performance improvements on a range of d...
We introduce VAST, the Valence-Assessing Semantics Test, a novel intrinsic evaluation task for conte...
We study the influence of context on how humans evaluate the complexity of a sentence in English. We...
A significant amount of work has been devoted recently to developing learning techniques that can be used...
Through in-context learning (ICL), large-scale language models are effective few-shot learners witho...
Parsing for mildly context-sensitive language formalisms is an important area within natural languag...
It has been shown that complexity metrics, computed by a syntactic parser, are ...
Lexical simplification systems replace complex word...
Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations lea...