

Poster in Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning

On the Limitations of LLM-Synthesized Social Media Misinformation Moderation

Sahajpreet Singh · Jiaying Wu · Svetlana Churina · Kokil Jaidka


Abstract:

Despite significant advances in Large Language Models (LLMs), their effectiveness in social media misinformation moderation -- specifically, in generating moderation texts whose accuracy, coherence, and citation reliability match human-led efforts such as Community Notes (CNs) on X -- remains an open question. In this work, we introduce ModBench, a real-world misinformation moderation benchmark consisting of tweets flagged as misleading alongside their corresponding human-written CNs. We evaluate representative open- and closed-source LLMs on ModBench, prompting them to generate CN-style moderation notes with access to human-written CN demonstrations and the relevant web-sourced references used by CN creators. Our findings reveal persistent and significant flaws in LLM-generated moderation notes, underscoring the continued need to incorporate trustworthy human-written information for accurate and reliable misinformation moderation.
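The evaluation setup described above can be pictured as few-shot generation: the model sees a handful of tweet/CN demonstration pairs plus the web-sourced references, then drafts a note for the target tweet. The sketch below is a minimal illustration of that setup under stated assumptions, not the authors' code; the OpenAI chat-completions client, the model name, and the function names (build_prompt, generate_note) are all illustrative choices.

```python
# Illustrative sketch only; ModBench's actual prompts and models are not given here.
# Assumes the OpenAI Python client, but any chat-completion API would work the same way.
from openai import OpenAI


def build_prompt(tweet: str, demos: list[tuple[str, str]], references: list[str]) -> str:
    """Assemble a few-shot prompt: CN demonstrations, then references, then the target tweet."""
    parts = ["Write a Community Note that fact-checks the final tweet, citing the references."]
    for demo_tweet, demo_note in demos:
        parts.append(f"Tweet: {demo_tweet}\nCommunity Note: {demo_note}")
    parts.append("References:\n" + "\n".join(f"- {url}" for url in references))
    parts.append(f"Tweet: {tweet}\nCommunity Note:")
    return "\n\n".join(parts)


def generate_note(tweet: str, demos: list[tuple[str, str]], references: list[str],
                  model: str = "gpt-4o-mini") -> str:
    """Prompt an LLM to draft a CN-style moderation note for the given tweet."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(tweet, demos, references)}],
    )
    return resp.choices[0].message.content
```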
