A comparison of chain-of-thought reasoning strategies across datasets and models
Medical University of Vienna, Center for Medical Data Science, Institute of Artificial Intelligence, Vienna, Austria.
KTH, School of Electrical Engineering and Computer Science (EECS). ORCID iD: 0009-0005-7038-7794
Humboldt University, Berlin, Germany.
Medical University of Vienna, Center for Medical Data Science, Institute of Artificial Intelligence, Vienna, Austria.
2024 (English). In: PeerJ Computer Science, E-ISSN 2376-5992, Vol. 10, p. 1-13. Article in journal (Refereed). Published.
Abstract [en]

Emergent chain-of-thought (CoT) reasoning capabilities promise to improve the performance and explainability of large language models (LLMs). However, uncertainties remain about how reasoning strategies formulated for previous model generations generalize to new model generations and different datasets. In this small-scale study, we compare different reasoning strategies induced by zero-shot prompting across six recently released LLMs (davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-xxl and Cohere command-xlarge). We test them on six question-answering datasets that require real-world knowledge application and logical verbal reasoning, including datasets from scientific and medical domains. Our findings demonstrate that while some variations in effectiveness occur, gains from CoT reasoning strategies remain robust across different models and datasets. GPT-4 benefits the most from current state-of-the-art reasoning strategies and performs best by applying a prompt previously found through automated prompt discovery.
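The zero-shot prompting setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation code: the function name is invented, and the trigger phrase shown is the widely used "Let's think step by step." formulation rather than any specific prompt tested in the study.

```python
# Minimal sketch of zero-shot chain-of-thought (CoT) prompting:
# a reasoning trigger is appended after the question so the model
# produces intermediate reasoning steps before its final answer.

def build_zero_shot_cot_prompt(question: str,
                               trigger: str = "Let's think step by step.") -> str:
    """Combine a question with a zero-shot CoT reasoning trigger."""
    return f"Q: {question}\nA: {trigger}"

# Example: the resulting string would be sent as the model's input.
prompt = build_zero_shot_cot_prompt(
    "A patient receives 250 mg of a drug every 8 hours. "
    "How many mg does the patient receive per day?")
print(prompt)
```

Different reasoning strategies in this framing amount to swapping the trigger text; the study compares how well such strategies transfer across models and datasets.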

Place, publisher, year, edition, pages
PeerJ , 2024. Vol. 10, p. 1-13
Keywords [en]
Chain-of-thought reasoning, Externalized reasoning, Large language models, Question-answering datasets, Zero-shot prompting
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-347297
DOI: 10.7717/PEERJ-CS.1999
Scopus ID: 2-s2.0-85194168538
OAI: oai:DiVA.org:kth-347297
DiVA, id: diva2:1867229
Note

QC 20240610

Available from: 2024-06-10. Created: 2024-06-10. Last updated: 2024-06-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Praas, Robert

Search in DiVA

By author/editor
Praas, Robert
By organisation
School of Electrical Engineering and Computer Science (EECS)
In the same journal
PeerJ Computer Science
Language Technology (Computational Linguistics)

Search outside of DiVA

Google
Google Scholar
