Report

Description

To inform the discussion over free speech and hate speech, this study examines the way racial, religious and ethnic slurs are employed on Twitter.

Executive summary: How to define the limits of free speech is a central debate in most modern democracies. This is particularly difficult in relation to hateful, abusive and racist speech. The patterns of hate speech are complex, but there is increasing focus on the volume and nature of hateful or racist speech taking place online, and new modes of communication mean it is easier than ever to find and capture this type of language.

How and whether to respond to certain types of language use without curbing freedom of expression in this online space is a significant question for policy makers, civil society groups, law enforcement agencies and others. This short study aims to inform these difficult decisions by examining specifically the way racial and ethnic slurs (henceforth, ‘slurs’) are used on the popular microblogging site, Twitter.

Slurs are a set of words, terms or nicknames used to refer to groups in a society in a derogatory, pejorative or insulting manner. Slurs can be used in a hateful way, but that is not always the case. Therefore, this research is not about hate speech per se, but about linguistics: word use and meaning.

In this study, we aim to answer the following two questions:

(a) In what ways are slurs being used on Twitter, and in what volume?

(b) What is the potential for automated machine learning techniques to accurately identify and classify slurs?
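To make question (b) concrete, the sketch below shows one common approach to this kind of task: a bag-of-words Naive Bayes classifier that separates derogatory from non-derogatory uses of a term. This is purely illustrative and is not the study's actual method; the training examples are invented placeholders (with `SLUR` standing in for an actual slur), and a real system would be trained on a large annotated corpus of tweets.

```python
# Illustrative sketch only, not the study's pipeline: a minimal
# bag-of-words Naive Bayes classifier over hand-invented examples.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label example counts."""
    counts = {}           # label -> Counter of word frequencies
    totals = Counter()    # label -> number of training examples
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximising log prior + log likelihood,
    with add-one (Laplace) smoothing for unseen words."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(words.values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((words[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Placeholder data standing in for human-annotated tweets.
data = [
    ("you people are SLUR and worthless", "derogatory"),
    ("typical SLUR behaviour again", "derogatory"),
    ("the word SLUR has a long history", "non-derogatory"),
    ("reclaiming SLUR within the community", "non-derogatory"),
]
counts, totals = train(data)
print(classify("worthless SLUR people", counts, totals))
```

With a handful of toy examples the classifier simply keys on co-occurring words ("worthless", "typical" vs "history", "reclaiming"); the study's point is that distinguishing such usages automatically, at scale and with real-world ambiguity, is far harder than this sketch suggests.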

Publication Details

Published: 2014