Computation, Vol. 13, Pages 196: Beyond Traditional Classifiers: Evaluating Large Language Models for Robust Hate Speech Detection



Computation doi: 10.3390/computation13080196

Authors:
Basel Barakat
Sardar Jaf

Hate speech detection remains a significant challenge due to the nuanced and context-dependent nature of hateful language. Traditional classifiers, trained on specialized corpora, often struggle to identify subtle or manipulated hate speech accurately. This paper explores the potential of large language models (LLMs) to address these limitations. By leveraging their extensive training on diverse texts, LLMs demonstrate a superior ability to understand context, which is crucial for effective hate speech detection. We conduct a comprehensive evaluation of several LLMs on both binary and multi-label hate speech datasets. Our findings aim to clarify the extent to which LLMs can improve hate speech classification accuracy, particularly in complex and challenging cases.
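The kind of evaluation the abstract describes is commonly run as a zero-shot prompting pipeline: wrap each message in a classification instruction, send it to an LLM, and map the free-form reply back to a label. The sketch below illustrates that pattern for the binary case; the prompt wording, label set, and parsing rules are illustrative assumptions, not the paper's published method.

```python
# Hypothetical sketch (not from the paper): zero-shot binary hate speech
# classification via prompting. Prompt template and parsing rules are
# illustrative assumptions.

LABELS = ("hateful", "not hateful")

def build_prompt(text: str) -> str:
    """Wrap an input message in a zero-shot classification instruction."""
    return (
        "Classify the following message as 'hateful' or 'not hateful'.\n"
        "Answer with the label only.\n\n"
        f"Message: {text}\n"
        "Label:"
    )

def parse_label(response: str) -> str:
    """Map a free-form LLM reply to one of the two labels.

    Checks 'not hateful' first (since it contains 'hateful' as a
    substring) and falls back to 'not hateful' for ambiguous replies.
    """
    normalized = response.strip().lower()
    if normalized.startswith("not hateful"):
        return "not hateful"
    if normalized.startswith("hateful"):
        return "hateful"
    return "not hateful"
```

In practice, `build_prompt(text)` would be sent to whichever chat-completion API hosts the model under evaluation, and the raw reply fed through `parse_label` to obtain a prediction comparable against gold labels; the multi-label setting extends the same idea with a larger label list and a parser that accepts several labels per reply.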


