Zero-shot relation extraction (RE) poses the challenge of identifying entity relationships in text without training on the target relations. Despite the significant advances that large language models (LLMs) have brought to natural language processing, their performance on zero-shot RE still lags behind traditional approaches that fine-tune smaller pre-trained language models. This limitation is attributed to prompting strategies that fail to leverage the full capabilities of LLMs for zero-shot RE, given the intrinsic complexity of the RE task. A compelling question is whether LLMs can address a complex task such as RE by decomposing it into simpler, distinct subtasks that are easier to manage and solve individually. To answer this question, we propose Refine-Estimate-Answer (REA), a multi-stage prompting strategy that decomposes the RE task into more manageable subtasks and applies iterative refinement to guide LLMs through the complex reasoning required for accurate RE. We validate the effectiveness of REA through comprehensive experiments on multiple public RE datasets, demonstrating marked improvements over existing LLM-based frameworks. On the FewRel, Wiki-ZSL, and TACRED datasets, our approach boosts vanilla prompting F1 scores by 31.57, 19.52, and 15.39 points, respectively, thereby outperforming state-of-the-art LLM-based methods.
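To make the decomposition concrete, the following is a minimal Python sketch of what a Refine-Estimate-Answer pipeline could look like. The stage prompts, the function names, and the generic `llm` callable are all illustrative assumptions for exposition; the actual stage prompts and refinement criteria are defined in the paper.

```python
# Illustrative sketch of a Refine-Estimate-Answer (REA)-style multi-stage
# prompting pipeline. All names, prompt wording, and stage boundaries shown
# here are assumptions; only the overall refine -> estimate -> answer
# decomposition follows the approach described above.
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out LLM client


def refine(llm: LLM, sentence: str, head: str, tail: str, rounds: int = 2) -> str:
    """Stage 1 (assumed): iteratively rewrite the context so the entity
    pair and the relevant evidence are stated as explicitly as possible."""
    context = sentence
    for _ in range(rounds):
        context = llm(
            f"Rewrite the sentence below so that the relationship between "
            f"'{head}' and '{tail}' is stated as explicitly as possible, "
            f"without adding new facts.\nSentence: {context}"
        ).strip()
    return context


def estimate(llm: LLM, context: str, head: str, tail: str,
             labels: List[str], k: int = 3) -> List[str]:
    """Stage 2 (assumed): narrow the full label set to a short candidate
    list, turning one hard classification into an easier choice."""
    reply = llm(
        f"Context: {context}\nEntities: '{head}', '{tail}'\n"
        f"From this list, pick the {k} most plausible relations, "
        f"comma-separated: {', '.join(labels)}"
    )
    shortlist = [r.strip() for r in reply.split(",")]
    valid = [r for r in shortlist if r in labels]
    return valid[:k] or labels  # fall back to the full set if parsing fails


def answer(llm: LLM, context: str, head: str, tail: str,
           candidates: List[str]) -> str:
    """Stage 3 (assumed): commit to a single relation from the shortlist."""
    reply = llm(
        f"Context: {context}\nEntities: '{head}', '{tail}'\n"
        f"Answer with exactly one relation from: {', '.join(candidates)}"
    ).strip()
    return reply if reply in candidates else candidates[0]


def rea_extract(llm: LLM, sentence: str, head: str, tail: str,
                labels: List[str]) -> str:
    """Run the three stages end to end for one (sentence, entity pair)."""
    context = refine(llm, sentence, head, tail)
    candidates = estimate(llm, context, head, tail, labels)
    return answer(llm, context, head, tail, candidates)
```

Each stage confronts the model with a simpler decision than direct single-prompt classification, which is the intuition behind the reported gains over vanilla prompting.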