Wiki-based Prompts for Enhancing Relation Extraction using Language Models
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-3264-974X
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-2748-8929
Oslo Metropolitan University, Oslo, Norway.
SINTEF AS, Oslo, Norway.
2024 (English). In: 39th Annual ACM Symposium on Applied Computing, SAC 2024, Association for Computing Machinery (ACM), 2024, p. 731-740. Conference paper, Published paper (Refereed)
Abstract [en]

Prompt-tuning and instruction-tuning of language models have exhibited significant results in few-shot Natural Language Processing (NLP) tasks, such as Relation Extraction (RE), which involves identifying relationships between entities within a sentence. However, the effectiveness of these methods relies heavily on the design of the prompts. A compelling question is whether incorporating external knowledge can enhance the language model's understanding of NLP tasks. In this paper, we introduce wiki-based prompt construction that leverages Wikidata as a source of information to craft more informative prompts for both prompt-tuning and instruction-tuning of language models in RE. Our experiments show that using wiki-based prompts enhances cutting-edge language models in RE, emphasizing their potential for improving RE tasks. Our code and datasets are available on GitHub.
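As a rough illustration of the idea described in the abstract (not the authors' actual implementation), a wiki-based RE prompt might be assembled by prepending entity background knowledge to the task question. Here the Wikidata descriptions are hard-coded stand-ins for values one would normally retrieve from Wikidata, and all names (`build_wiki_prompt`, the prompt wording) are hypothetical:

```python
# Hypothetical sketch: enrich a relation-extraction prompt with
# per-entity background text, as stand-in for descriptions that
# would be fetched from Wikidata in a real pipeline.
WIKIDATA_DESCRIPTIONS = {
    "Paris": "capital and largest city of France",
    "France": "country in Western Europe",
}

def build_wiki_prompt(sentence: str, head: str, tail: str,
                      descriptions: dict) -> str:
    """Compose an RE prompt whose context includes both entities' descriptions."""
    context = "; ".join(
        f"{entity}: {descriptions.get(entity, 'no description available')}"
        for entity in (head, tail)
    )
    return (
        f"Background: {context}.\n"
        f"Sentence: {sentence}\n"
        f"Question: what is the relation between '{head}' and '{tail}'?"
    )

prompt = build_wiki_prompt(
    "Paris is the capital of France.", "Paris", "France",
    WIKIDATA_DESCRIPTIONS,
)
print(prompt)
```

The resulting prompt can then be used as input for prompt-tuning or instruction-tuning; the added background lines are what distinguishes a "wiki-based" prompt from a plain task prompt.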

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024, p. 731-740.
Keywords [en]
knowledge integration, language models, prompt construction, relation extraction
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-350722
DOI: 10.1145/3605098.3635949
ISI: 001236958200108
Scopus ID: 2-s2.0-85197687891
OAI: oai:DiVA.org:kth-350722
DiVA, id: diva2:1884688
Conference
39th Annual ACM Symposium on Applied Computing, SAC 2024, Avila, Spain, April 8-12, 2024
Note

Part of ISBN 9798400702433

QC 20240719

Available from: 2024-07-17. Created: 2024-07-17. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Layegh, Amirhossein; Payberah, Amir H.; Matskin, Mihhail
