Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou
Paper
Advances in Neural Information Processing Systems 35, pp. 24824–24837, 2022
DOI: 10.52202/068431-1800
Notes
riva2026task cites chain-of-thought prompting as a deployment-time procedure: it creates longer, more informative contexts and improves performance within a frozen model, while leaving the underlying separation set untouched. The citation anchors the claim that prompting changes readout selection, not the representational repertoire.
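To make the "deployment-time" framing concrete: chain-of-thought prompting only changes the text fed to the model, prepending worked exemplars whose answers spell out intermediate reasoning; the frozen model is then queried as usual. A minimal sketch, with an illustrative exemplar and question that are not taken from the paper:

```python
# Minimal sketch of few-shot chain-of-thought prompting.
# The exemplar and query below are illustrative, not from the paper;
# only the prompt changes -- model weights stay frozen.

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar with intermediate reasoning steps."""
    exemplar = (
        "Q: A pack has 3 pens. If I buy 4 packs, how many pens do I have?\n"
        "A: Each pack has 3 pens. 4 packs contain 4 * 3 = 12 pens. "
        "The answer is 12.\n\n"
    )
    # The exemplar's step-by-step answer is what elicits a similar
    # reasoning trace for the new question at generation time.
    return exemplar + f"Q: {question}\nA:"

prompt = build_cot_prompt("A box holds 5 apples. How many apples fill 6 boxes?")
print(prompt)
```

Contrast with standard few-shot prompting, where the exemplar answer would read only "A: The answer is 12." and no reasoning trace is demonstrated.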