Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models, by Anni Zou and 3 other authors
Abstract: Large language models (LLMs) have unveiled remarkable reasoning capabilities by exploiting chain-of-thought (CoT) prompting, which generates intermediate reasoning chains that serve as the rationale for deriving the answer. However, current CoT methods either simply employ general prompts such as "Let's think step by step," or rely heavily on pre-defined, task-specific demonstrations to attain strong performance, creating an inescapable gap between performance and generalization. To bridge this gap, we propose GeM-CoT, a Generalizable CoT prompting mechanism for Mixed-task scenarios where the type of the input question is unknown. GeM-CoT first categorizes the question type and then automatically samples or constructs demonstrations from the corresponding data pool. With this technical design, GeM-CoT simultaneously achieves superior generalization and remarkable performance on 10 public reasoning tasks and 23 BBH tasks.
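The two-stage routing the abstract describes (categorize the question type, then either sample matching demonstrations or fall back to a general prompt) can be sketched as follows. This is a minimal illustrative sketch only: the demo pool contents, the keyword-based type matcher, and all function names are assumptions standing in for the paper's actual components, not its implementation.

```python
from typing import Optional

# Hypothetical demonstration pool keyed by question type (illustrative data,
# not from the paper).
DEMO_POOL = {
    "arithmetic": ["Q: What is 2 + 2? A: Let's think step by step. 2 + 2 = 4. The answer is 4."],
    "commonsense": ["Q: Can a fish climb a tree? A: Let's think step by step. Fish lack limbs. The answer is no."],
}

def route_question(question: str) -> Optional[str]:
    """Toy type matcher: a keyword heuristic standing in for GeM-CoT's
    actual question-type categorization."""
    if any(ch.isdigit() for ch in question):
        return "arithmetic"
    if question.lower().startswith(("can", "is", "does")):
        return "commonsense"
    return None  # no matching type found

def build_prompt(question: str) -> str:
    """If a type is matched, prepend demonstrations sampled from the pool
    (few-shot CoT); otherwise fall back to a general zero-shot CoT prompt."""
    qtype = route_question(question)
    if qtype is not None:
        demos = "\n".join(DEMO_POOL[qtype])
        return f"{demos}\nQ: {question}\nA: Let's think step by step."
    return f"Q: {question}\nA: Let's think step by step."
```

In the paper's full mechanism, unmatched questions are answered zero-shot and their generated rationales can then enrich the data pool; the sketch above covers only the routing and prompt-assembly step.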
Submission history
From: Anni Zou [view email]
[v1] Tue, 10 Oct 2023 15:10:03 UTC (2,864 KB)
[v2] Wed, 11 Oct 2023 10:05:29 UTC (2,859 KB)
[v3] Tue, 20 Feb 2024 15:27:20 UTC (6,788 KB)