Math word problem (MWP) solving is the task of automatically deriving a solution expression from a given math problem stated in text. Previous studies suffer from spurious correlations between the input text and the output expression. To mitigate this issue, we propose a self-consistent reasoning framework called SCR, which adopts a pruning strategy to correct the output distribution shift and thereby implicitly fix spuriously correlated samples. Specifically, we first obtain a sub-network by pruning a roberta2tree model, so that the gap in output distribution between the original roberta2tree model and the pruned sub-network can expose spuriously correlated samples. Then, we calibrate the output distribution shift by applying symmet...
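The distribution-gap idea in the SCR abstract can be sketched in a toy form: score each training sample by how far the full model's output distribution drifts from the pruned sub-network's, and flag samples with a large gap as potentially spuriously correlated. This is an illustrative NumPy sketch, not the authors' implementation — the logits are fabricated stand-ins for the roberta2tree model and its pruned sub-network, and the symmetric-KL measure and threshold are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def symmetric_kl(p, q, eps=1e-12):
    # Symmetric KL divergence as one possible gap measure (assumption).
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy stand-ins for the full model's and the pruned sub-network's
# output logits over expression tokens, one row per training sample.
full_logits = np.array([[2.0, 0.1, 0.1],
                        [1.0, 1.0, 1.0],
                        [0.2, 3.0, 0.2]])
pruned_logits = np.array([[1.9, 0.2, 0.1],   # small gap: consistent
                          [3.0, 0.1, 0.1],   # large gap: suspicious
                          [0.1, 2.8, 0.3]])  # small gap: consistent

gaps = [symmetric_kl(softmax(f), softmax(p))
        for f, p in zip(full_logits, pruned_logits)]
threshold = 1.0  # hypothetical cut-off for exposing spurious samples
flagged = [i for i, g in enumerate(gaps) if g > threshold]
print(flagged)  # indices of samples whose distributions disagree
```

Only the second sample, where the full model is uncertain but the sub-network is confident, exceeds the threshold; in SCR such flagged samples would then be targeted by the calibration step.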
When applied to question answering and other text generation tasks, language models (LMs) may be que...
In mathematical word problem solving, a relatively well-established finding is that more errors are ...
While large pre-trained language models (PLM) have shown their great skills at solving discriminativ...
Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging ...
In this paper, we revisit the solving bias when evaluating models on current Math Word Problem (MWP)...
Large-scale pre-trained language models (PLMs) bring new opportunities to challenge problems, especi...
While large pre-trained language models are powerful, their predictions often lack logical consisten...
Large language models (LMs) beyond a certain scale demonstrate the emergent capability of generatin...
From the latter half of the last decade, there has been a growing interest in developing algorithms ...
Math word problem (MWP) solving faces a dilemma in number representation learning. In order to avoid...
Mathematical word problems (MWP) test critical aspects of reading comprehension in conjunction with ...
Word problems are difficult. Although children eventually master computational skills, problem solvi...
We study whether language models can evaluate the validity of their own claims and predict which que...
Recent Language Models (LMs) achieve breakthrough performance in code generation when trained on hum...
The derivation of mathematical results in specialised fields, using Large Language Models (LLMs), is...