ToxCCIn: Toxic Content Classification with Interpretability

Xiang, T., MacAvaney, S., Yang, E. and Goharian, N. (2021) ToxCCIn: Toxic Content Classification with Interpretability. In: 11th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA 2021), 19-23 Apr 2021.

234947.pdf - Published Version (403kB)
Available under License Creative Commons Attribution.

Publisher's URL: https://www.aclweb.org/anthology/2021.wassa-1.1/

Abstract

Despite the recent successes of transformer-based models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media, because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find that this approach is effective and can produce explanations that exceed the quality of those provided by Logistic Regression analysis (often regarded as a highly interpretable model), according to a human study.

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: MacAvaney, Dr Sean
Authors: Xiang, T., MacAvaney, S., Yang, E., and Goharian, N.
College/School: College of Science and Engineering > School of Computing Science
Copyright Holders: Copyright © 2021 The Authors
Publisher Policy: Reproduced under a Creative Commons licence