Metaphor identification using large language models: A comparison of RAG, prompt engineering, and fine-tuning

  • Matteo Fuoli
  • Weihang Huang
  • Jeannette Littlemore
  • Sarah Turner
  • Ellen Wilding

    Research output: Working paper/Preprint

    Abstract

    Metaphor is a pervasive feature of discourse and a powerful lens for examining cognition, emotion, and ideology. Large-scale analysis, however, has been constrained by the need for manual annotation due to the context-sensitive nature of metaphor. This study investigates the potential of large language models (LLMs) to automate metaphor identification in full texts. We compare three methods: (i) retrieval-augmented generation (RAG), where the model is provided with a codebook and instructed to annotate texts based on its rules and examples; (ii) prompt engineering, where we design task-specific verbal instructions; and (iii) fine-tuning, where the model is trained on hand-coded texts to optimize performance. Within prompt engineering, we test zero-shot, few-shot, and chain-of-thought strategies. Our results show that state-of-the-art closed-source LLMs can achieve high accuracy, with fine-tuning yielding a median F1 score of 0.79. A comparison of human and LLM outputs reveals that most discrepancies are systematic, reflecting well-known grey areas and conceptual challenges in metaphor theory. We propose that LLMs can be used to at least partly automate metaphor identification and can serve as a testbed for developing and refining metaphor identification protocols and the theory that underpins them.
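    The three prompting strategies mentioned above can be illustrated with a minimal sketch. The prompt wording, tag format, and few-shot demonstrations below are invented for illustration and are not the authors' actual instructions or codebook.

    ```python
    # Hypothetical sketch of zero-shot, few-shot, and chain-of-thought prompts
    # for metaphor identification. All wording here is an assumption.

    ZERO_SHOT = (
        "Identify all metaphorically used words in the text below. "
        "Wrap each one in <m>...</m> tags and leave everything else unchanged.\n\n"
        "Text: {text}"
    )

    # Hand-annotated demonstrations (invented examples, not from the study).
    FEW_SHOT_EXAMPLES = [
        ("She attacked every weak point in my argument.",
         "She <m>attacked</m> every <m>weak point</m> in my argument."),
        ("Prices are climbing again.",
         "Prices are <m>climbing</m> again."),
    ]

    def build_few_shot_prompt(text: str) -> str:
        """Prepend annotated demonstrations to the basic instruction."""
        demos = "\n\n".join(
            f"Text: {src}\nAnnotated: {tgt}" for src, tgt in FEW_SHOT_EXAMPLES
        )
        return (
            "Identify all metaphorically used words, wrapping each in <m>...</m> "
            "tags. Follow the examples.\n\n"
            + demos
            + f"\n\nText: {text}\nAnnotated:"
        )

    def build_cot_prompt(text: str) -> str:
        """Chain-of-thought variant: ask for explicit reasoning first."""
        return (
            "For each word in the text, first reason step by step about whether "
            "its contextual meaning contrasts with a more basic meaning, then "
            "output the text with metaphorical words in <m>...</m> tags.\n\n"
            f"Text: {text}"
        )
    ```

    The resulting strings would be sent to an LLM API; the RAG condition would additionally retrieve relevant codebook passages and prepend them to the instruction, while fine-tuning would replace prompt design with training on hand-coded texts.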
    Original language: English
    DOIs
    Publication status: Published - 29 Sept 2025

    Keywords

    • cs.CL
    • cs.AI
