LLM generated responses to mitigate the impact of hate speech

Jakub Podolak, Szymon Lukasik, 4 Authors, Piotr Sankowski

2024 · DOI: 10.18653/v1/2024.findings-emnlp.931
Conference on Empirical Methods in Natural Language Processing · 8 Citations

TLDR

This paper outlines the design of an automatic moderation system, proposes a simple metric for measuring user engagement, and describes the methodology of the first real-life A/B test assessing the effectiveness of LLM-generated counter-speech.

Abstract

In this study, we explore the use of Large Language Models (LLMs) to counteract hate speech. We conducted the first real-life A/B test assessing the effectiveness of LLM-generated counter-speech. During the experiment, we posted 753 automatically generated responses aimed at reducing user engagement under tweets that contained hate speech toward Ukrainian refugees in Poland. Our work shows that interventions with LLM-generated responses significantly decrease user engagement, particularly for original tweets with at least ten views, reducing it by over 20%. This paper outlines the design of our automatic moderation system, proposes a simple metric for measuring user engagement, and details the methodology of conducting such an experiment. We discuss the ethical considerations and challenges in deploying generative AI for discourse moderation.
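The abstract reports a relative drop in engagement between tweets that received an LLM-generated reply and those that did not, restricted to tweets with at least ten views. A minimal sketch of that comparison is below; the exact engagement formula (interactions per view) and the field names (`views`, `replies`, `retweets`, `likes`) are illustrative assumptions, not the paper's actual metric definition.

```python
from statistics import mean

def engagement(tweet: dict) -> float:
    # Hypothetical engagement metric: total interactions per view.
    # The paper proposes "a simple metric"; this exact formula is
    # an assumption for illustration only.
    interactions = tweet["replies"] + tweet["retweets"] + tweet["likes"]
    return interactions / tweet["views"] if tweet["views"] > 0 else 0.0

def relative_reduction(control: list, treated: list, min_views: int = 10) -> float:
    # Mean engagement drop (as a fraction) for tweets meeting the
    # view threshold; the >= 10 views cutoff mirrors the abstract.
    ce = [engagement(t) for t in control if t["views"] >= min_views]
    te = [engagement(t) for t in treated if t["views"] >= min_views]
    return 1.0 - mean(te) / mean(ce)

# Toy data: "control" tweets received no intervention; "treated"
# tweets got an automatically generated counter-speech reply.
control = [{"views": 100, "replies": 5, "retweets": 3, "likes": 12},
           {"views": 50,  "replies": 2, "retweets": 1, "likes": 7}]
treated = [{"views": 80,  "replies": 2, "retweets": 1, "likes": 6},
           {"views": 60,  "replies": 1, "retweets": 0, "likes": 5}]

print(f"relative engagement reduction: {relative_reduction(control, treated):.1%}")
```

In a real deployment the comparison would also need a significance test across many tweets; this sketch only shows the shape of the point estimate.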