Leveraging LLMs for Identifying Types of Misinformation
Koustav Rudra; Niloy Ganguly; Jeanne Mifsud Bonnici; Eric Müller-Budack; Ritumbra Manuvie; Anett Hoppe; Ran Yu; Jiqun Liu; Nilavra Bhattacharya (Eds.). Proceedings of the Joint 1st International Workshop on Disinformation and Misinformation in the Age of Generative AI (DISMISS-FAKE 2025) and the 4th International Workshop on Investigating Learning during Web Search (IWILDS 2025), co-located with the 18th International ACM WSDM Conference on Web Search and Data Mining (WSDM 2025). Aachen: CEUR/RWTH, 2025, pp. 39-46
Year of publication: 2025
Publication type: Miscellaneous (conference paper)
Language: English
Abstract
The recent development of LLMs has demonstrated their ability to generate coherent, contextually relevant responses across a variety of tasks. However, they also pose significant risks, including disinformation, factual inaccuracies, and the propagation of bias. This research aims to assess the effect of prompting techniques on the detection of different types of misinformation using data collected from Reddit. Our findings suggest that the few-shot prompting method performs well across different LLMs. However, the positional bias found in our experiments indicates that prompt engineering needs to be further investigated for such a sensitive task.