HATE SPEECH CAN BE CONTAINED LIKE A COMPUTER VIRUS


The spread of hate speech via social media could one day be tackled using the same ‘quarantine’ approach deployed to detect and combat malicious software.

A study, conducted by an engineer and a linguist at the University of Cambridge, used databases of threats and violent insults to build algorithms that score the likelihood of an online message containing hate speech.
As these algorithms are refined, potential hate speech could be identified and “quarantined”: users would receive a warning alert with a ‘Hate O’Meter’ – a hate-speech severity score – the sender’s name, and the option to view the content or delete it unseen.
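The mechanics of that gate are easy to picture. The Python sketch below shows one way a quarantine step might sit between a classifier and an inbox; the threshold value and all the names are assumptions made for illustration, not details published by the project.

    from dataclasses import dataclass

    QUARANTINE_THRESHOLD = 0.7  # assumed cut-off; the project's real value is not public

    @dataclass
    class QuarantinedMessage:
        sender: str
        hate_o_meter: int  # severity score shown to the recipient, 0-100
        body: str          # withheld until the user chooses to view or delete it

    def deliver(sender: str, body: str, score: float):
        """Route a message straight to the inbox, or into quarantine."""
        if score < QUARANTINE_THRESHOLD:
            return ("inbox", body)
        # Above the threshold: hide the content and surface only the warning,
        # the sender's name and the Hate O'Meter read-out, as described above.
        return ("quarantine", QuarantinedMessage(sender, round(score * 100), body))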
This method is similar to spam and malware filters, and researchers from the Giving Voice to Digital Democracies project believe it could dramatically reduce the amount of hate speech people experience. The team aim to have a prototype of the technology ready in early 2020.
“Many people don’t like the idea of an unelected corporation or micromanaging government deciding what we can and can’t say to each other,” said Marcus Tomalin, one of the two researchers.
“Our system will flag when you should be careful, but it’s always your call. It doesn’t stop people posting or viewing what they like, but it gives much-needed control to those being inundated with hate.”
Meanwhile, Ullman is gathering more “training data”: verified hate speech from which the algorithms can learn. This data will help refine the “confidence scores” that determine whether a message is quarantined and what the subsequent Hate O’Meter read-out shows – a threshold that could work like a sensitivity dial, set according to each user’s preferences.
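How such a dial might interact with the confidence scores can be sketched simply: turning the dial moves the quarantine threshold. The linear mapping below is a guess for illustration, not the project’s published design.

    def quarantine_threshold(sensitivity: int) -> float:
        """Map a user-facing dial (0 = lenient, 10 = strict) to a score cut-off.

        Stricter settings lower the confidence score needed to trigger a
        quarantine; the linear mapping is an assumption for this sketch.
        """
        if not 0 <= sensitivity <= 10:
            raise ValueError("sensitivity must be between 0 and 10")
        return 0.95 - 0.06 * sensitivity  # 0.95 when lenient, 0.35 when strict

    # Example: a message scoring 0.6 is delivered at sensitivity 3 (threshold
    # 0.77) but quarantined at sensitivity 8 (threshold 0.47).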
A basic example of this score might involve a word like “bitch”: a misogynistic slur, but also a legitimate term in contexts such as dog breeding. The researchers said that the algorithmic analysis of where such a word sits syntactically – the types of surrounding words and the semantic relations between them – determines the hate-speech score.
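Even a deliberately crude model shows why the surrounding words matter more than the keyword itself. The toy scikit-learn classifier below is purely a stand-in: the training sentences are invented, and the Cambridge system analyses far richer structure than word bigrams.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: the same keyword in abusive and benign contexts.
    texts = [
        "you stupid bitch",                          # abusive usage
        "shut up you bitch",                         # abusive usage
        "the bitch whelped six healthy puppies",     # dog-breeding usage
        "a champion bitch at the kennel club show",  # dog-breeding usage
    ]
    labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

    # Unigrams plus bigrams capture a little of each word's context.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # The same word scores differently depending on its neighbours.
    print(model.predict_proba(["shut up you stupid bitch"])[0][1])        # leans hateful
    print(model.predict_proba(["the champion bitch had puppies"])[0][1])  # leans benign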
“Identifying individual keywords isn’t enough; we are looking at entire sentence structures and far beyond,” said Ullman. “Sociolinguistic information in user profiles and posting histories can all help improve the classification process.”
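One plausible way to fold posting history into the classification, sketched below with invented weightings and field names, is to blend the sentence-level score with a prior derived from the sender’s record. The researchers have not published such a formula, so treat this purely as illustration.

    def combined_score(text_score: float, prior_flags: int, total_posts: int) -> float:
        """Blend the sentence-level score with a sender-history prior.

        A Laplace-smoothed rate of previously flagged posts acts as the prior;
        the 0.8/0.2 weighting is an arbitrary assumption for this sketch.
        """
        history_prior = (prior_flags + 1) / (total_posts + 2)
        return 0.8 * text_score + 0.2 * history_prior

    # A borderline message (text score 0.55) from an account with 45 of its last
    # 50 posts flagged crosses a 0.6 threshold (~0.62); the same text from a
    # clean account does not (~0.44).
    print(combined_score(0.55, 45, 50))
    print(combined_score(0.55, 0, 50))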
Tomalin added: “Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses.”
However, the duo, who work in Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), said that – as with computer viruses – there will always be “an arms race” between hate speech and systems for limiting it.
The project has also begun to investigate “counter-speech”: the ways people respond to hate speech. The researchers intend to feed their findings into debates about how virtual assistants such as ‘Siri’ should respond to threats and intimidation.
