REA: Refine-Estimate-Answer Prompting for Zero-Shot Relation Extraction
Layegh, Amirhossein (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS). ORCID iD: 0000-0002-3264-974X
Payberah, Amir H. (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS). ORCID iD: 0000-0002-2748-8929
Matskin, Mihhail (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS). ORCID iD: 0000-0002-4722-0823
2024 (English). In: Natural Language Processing and Information Systems: 29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Proceedings. Springer Nature, 2024, p. 301-316. Conference paper, Published paper (Refereed)
Abstract [en]

Zero-shot relation extraction (RE) poses the challenge of identifying entity relationships in text without training on those specific relations. Despite the significant advances that large language models (LLMs) have brought to natural language processing, their application to zero-shot RE remains less effective than traditional models that fine-tune smaller pre-trained language models. This limitation is attributed to insufficient prompting strategies that fail to leverage the full capabilities of LLMs for zero-shot RE, given the intrinsic complexity of the RE task. A compelling question is whether LLMs can address complex tasks, such as RE, by decomposing them into simpler, distinct tasks that are easier to manage and solve individually. To answer this question, we propose the Refine-Estimate-Answer (REA) approach, a multi-stage prompting strategy that decomposes the RE task into more manageable subtasks and applies iterative refinement to guide LLMs through the complex reasoning required for accurate RE. We validate the effectiveness of REA through comprehensive testing across multiple public RE datasets, demonstrating marked improvements over existing LLM-based frameworks. Experimental results on the FewRel, Wiki-ZSL, and TACRED datasets show that our approach boosts vanilla prompting F1 scores by 31.57, 19.52, and 15.39 points, respectively, outperforming state-of-the-art LLM-based methods.
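
No full text is available in DiVA, so the sketch below is only an illustration, based solely on the abstract above, of what a Refine-Estimate-Answer style multi-stage prompting pipeline could look like for zero-shot RE. The stage prompts, function names, and the generic llm callable are assumptions made for illustration; they are not the authors' actual prompts or implementation, and the paper's iterative-refinement loop is not reproduced here.

# Illustrative sketch of a Refine-Estimate-Answer (REA) style prompting pipeline.
# NOTE: stage prompts, function names, and the `llm` callable are assumptions for
# illustration only; they are not taken from the paper.
from typing import Callable, List

def refine(llm: Callable[[str], str], sentence: str, candidates: List[str]) -> List[str]:
    """Refine stage (assumed): narrow the candidate relation labels for the sentence."""
    prompt = (
        "Keep only the relation labels that could plausibly hold between the marked "
        f"entities.\nSentence: {sentence}\nCandidates: {', '.join(candidates)}\n"
        "Return a comma-separated list."
    )
    kept = [r.strip() for r in llm(prompt).split(",") if r.strip() in candidates]
    return kept or candidates  # fall back to the full label set if parsing fails

def estimate(llm: Callable[[str], str], sentence: str, labels: List[str]) -> List[str]:
    """Estimate stage (assumed): rank the refined labels by plausibility."""
    prompt = (
        "Rank these relation labels from most to least plausible for the marked "
        f"entities.\nSentence: {sentence}\nLabels: {', '.join(labels)}\n"
        "Return a comma-separated ranking."
    )
    ranked = [r.strip() for r in llm(prompt).split(",") if r.strip() in labels]
    return ranked or labels

def answer(llm: Callable[[str], str], sentence: str, ranked: List[str]) -> str:
    """Answer stage (assumed): commit to a single relation label."""
    prompt = (
        "Choose exactly one relation label that holds between the marked entities.\n"
        f"Sentence: {sentence}\nLabels: {', '.join(ranked)}\nAnswer with the label only."
    )
    reply = llm(prompt).strip()
    return reply if reply in ranked else ranked[0]

def rea_extract(llm: Callable[[str], str], sentence: str, candidates: List[str]) -> str:
    """Run the three assumed stages in sequence for one sentence."""
    return answer(llm, sentence, estimate(llm, sentence, refine(llm, sentence, candidates)))

if __name__ == "__main__":
    # Dummy stand-in for an LLM call so the sketch runs without any external API.
    def dummy_llm(prompt: str) -> str:
        return "founded_by"

    labels = ["founded_by", "born_in", "employee_of"]
    print(rea_extract(dummy_llm, "[Steve Jobs] founded [Apple] in 1976.", labels))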

Place, publisher, year, edition, pages
Springer Nature, 2024, p. 301-316
Keywords [en]
Large Language Models, Prompting Strategy, Relation Extraction
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-354657
DOI: 10.1007/978-3-031-70239-6_21
Scopus ID: 2-s2.0-85205393225
OAI: oai:DiVA.org:kth-354657
DiVA id: diva2:1904553
Conference
29th International Conference on Applications of Natural Language to Information Systems, NLDB 2024, Turin, Italy, Jun 25 2024 - Jun 27 2024
Note

Part of ISBN 9783031702389

QC 20241010

Available from: 2024-10-09. Created: 2024-10-09. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Layegh, Amirhossein; Payberah, Amir H.; Matskin, Mihhail
