Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like "Let's think step by step" to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the "Let's think step by step" prompt to generate reasoning c...
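To make the two paradigms described above concrete, the following minimal sketch (Python) assembles a zero-shot CoT prompt using the "Let's think step by step" trigger and a few-shot CoT prompt built from a hand-crafted demonstration. The question text and the demonstration content are illustrative placeholders of my own, not examples taken from any of the papers listed here, and no particular LLM API is assumed; the sketch only constructs and prints the prompt strings.

```python
# Minimal sketch of the two chain-of-thought (CoT) prompting paradigms.
# The question and the hand-written demonstration below are illustrative
# placeholders, not examples from the cited papers.

QUESTION = "A farmer has 15 apples and gives away 6. How many are left?"

# Paradigm 1: zero-shot CoT -- a single trigger phrase elicits step-by-step
# reasoning without any hand-crafted demonstrations.
zero_shot_cot_prompt = f"Q: {QUESTION}\nA: Let's think step by step."

# Paradigm 2: few-shot (manual) CoT -- each demonstration pairs a question
# with a hand-written reasoning chain that leads to an answer.
demonstration = (
    "Q: Tom has 3 boxes with 4 pens each. How many pens does he have?\n"
    "A: Each box holds 4 pens and there are 3 boxes, so 3 * 4 = 12. "
    "The answer is 12.\n\n"
)
few_shot_cot_prompt = demonstration + f"Q: {QUESTION}\nA:"

if __name__ == "__main__":
    print("--- Zero-shot CoT prompt ---")
    print(zero_shot_cot_prompt)
    print()
    print("--- Few-shot CoT prompt ---")
    print(few_shot_cot_prompt)
```

Under this framing, the manual effort the abstract refers to is the writing of the demonstration block in the second paradigm; the idea it describes is that such demonstrations could instead be generated by running the zero-shot trigger over sampled questions and keeping the model's own reasoning chains.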
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural langua...
To augment language models with the ability to reason, researchers usually prompt or finetune them t...
Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging ...
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- signific...
Emergent chain-of-thought (CoT) reasoning capabilities promise to improve performance and explainabi...
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought pro...
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language proce...
Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reprod...
Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results ac...
Most existing chain-of-thought (CoT) prompting methods suffer from the issues of generalizability an...
Large language models (LMs) beyond a certain scale demonstrate the emergent capability of generatin...
Chain-of-Thought (CoT) is a technique that guides Large Language Models (LLMs) to decompose complex ...
Large language models (LLMs) have achieved remarkable advancements in the field of natural language ...
Logical reasoning remains a pivotal component within the realm of artificial intelligence. The recen...
Language models (LMs) with less than 100B parameters are known to perform poorly on chain-of-thought...