AI-POWERED SOCIAL MEDIA ALGORITHM COULD PREDICT AND PREVENT HATE CRIME


Police could soon be able to predict increases in hate crime and prevent them, thanks to a new social media-monitoring algorithm.

Data from Cardiff University’s HateLab project showed that as the number of “hate tweets” made from one location increased, so did the number of racially and religiously aggravated crimes in the real world – including violence, harassment and criminal damage.

Director of HateLab professor Matthew Williams said: “This is the first UK study to demonstrate a consistent link between Twitter hate speech targeting race and religion and racially and religiously aggravated offences that happen offline.

“Previous research has already established that major events can act as triggers for hate acts, but our analysis confirms this association is present even in the absence of such events.

“The research shows that online hate victimisation is part of a wider process of harm that can begin on social media and then migrate to the physical world.”

Computer scientists developed an artificial intelligence system to identify 294,361 "hateful" Twitter posts over an eight-month period between August 2013 and August 2014.

A total of 6,572 racially and religiously aggravated crimes were also extracted from police data. These figures, along with census data, were then assigned to one of 4,720 geographical areas within London, allowing researchers to pinpoint trends.
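
In practical terms, the kind of area-level linkage the researchers describe could be sketched as follows. This is a minimal illustration, not the HateLab pipeline: the area codes, column names and counts below are assumptions made for the example.

```python
# A minimal sketch (not the HateLab code) of the aggregation the study describes:
# hate-tweet counts and recorded offences are joined on geographical area so that
# online hate volume can be compared with offline crime. All figures are illustrative.
import pandas as pd

# Hypothetical per-area counts of tweets classified as hateful
hate_tweets = pd.DataFrame({
    "area_id": ["E01000001", "E01000002", "E01000003", "E01000004"],
    "hate_tweet_count": [120, 15, 340, 60],
})

# Hypothetical per-area counts of racially/religiously aggravated offences
offences = pd.DataFrame({
    "area_id": ["E01000001", "E01000002", "E01000003", "E01000004"],
    "aggravated_offences": [9, 2, 21, 5],
})

# Join the two sources on area, mirroring the study's area-level linkage
merged = hate_tweets.merge(offences, on="area_id")

# Simple association check between online hate volume and offline offences
correlation = merged["hate_tweet_count"].corr(merged["aggravated_offences"])
print(f"Area-level correlation: {correlation:.2f}")
```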

Researchers say an algorithm based on their methods could now be used by police to predict spikes in hate crime and prevent them by allocating more resources during specific periods.
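
As a rough illustration of how such an early-warning rule might work, the sketch below flags weeks in which hate-tweet volume rises well above its recent average for an area. The threshold rule and the sample figures are assumptions for the example, not the researchers' actual model.

```python
# A hedged sketch of a simple early-warning rule: flag weeks where the
# hate-tweet count far exceeds the recent rolling average for an area.
import pandas as pd

# Hypothetical weekly hate-tweet counts for one area
weekly_counts = pd.Series([40, 35, 50, 42, 38, 160, 45, 41])

# Flag weeks where the count exceeds the mean of the previous four weeks
# by more than two rolling standard deviations
rolling_mean = weekly_counts.rolling(window=4).mean().shift(1)
rolling_std = weekly_counts.rolling(window=4).std().shift(1)
spikes = weekly_counts > rolling_mean + 2 * rolling_std

for week, flagged in spikes.items():
    if flagged:
        print(f"Week {week}: possible spike -- consider allocating extra resources")
```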

Professor Williams added: "Until recently, the seriousness of online hate speech has not been fully recognised. These statistics prove that activities which unfold in the virtual world should not be ignored."

Williams also said that although the data was collected before the main social media companies introduced stricter hate speech policies, users were now using “more underground platforms”.

He added: “In time, our data science solutions will allow us to follow the hate wherever it goes.”

HateLab was set up to measure and counter the problem of hate speech online and offline across the world and has received more than £1.7m in funding from the Economic and Social Research Council (ESRC) and the US Department of Justice.
