
Time to rein in hate speech

RAVISHAAN RAHEL MUTHIAH argues that, much as social media are presently guilty of fanning hostilities, the roots of the problem lie in decades of xenophobic narratives peddled by politicians and the traditional media

WE’RE entering the age of “Extreme Information Disorder,” and minorities should be very afraid.

I work for the Joint Council for the Welfare of Immigrants (JCWI), a charity whose aim is to speak up for migrant rights in the UK. I was recently asked to give evidence to a parliamentary inquiry on “Social media, misinformation and harmful algorithms,” which was set up in the aftermath of the August 2024 riots.

Over the years, we’ve been battling the narrative that led to the explosion of rage that month.

The racist riots shook the country, but for organisations like ours they were the logical culmination of years of racist hostile environment policies and language.

Here is an example: our client Samiyah (not her real name) came to the UK seeking asylum. She was staying in an asylum hotel during the riots, and saw the crowds from her window. Samiyah was lucky enough not to come to physical harm during the riots and was granted refugee status soon after, but the experience was deeply traumatic.

Rumours sparking the August riots were spread on social media, which the far right then used as a tool to organise. With the recent changes in the digital landscape, far-right groups are spreading mis- and disinformation on an even larger scale.

In my testimony, I talked about how, with X and Meta relaxing their fact-checking functions, reducing restrictions on topics such as immigration and with Donald Trump hand in glove with Elon Musk, we are at risk of a new age of unmoderated, extreme content.

We recently conducted research on Facebook misinformation. The findings were deeply concerning. There is an industrial volume of mis- and disinformation on the platform.

We have seen the language of right-wing influencers parroted by the media and then these same anti-migrant sentiments repeatedly referenced in social media comments using language around “mass deportations,” “illegal invaders,” “scroungers,” and “parasites.”

The government’s pronouncements are also repeated: “stop the boats” is a popular one.

We have seen this hatred spilling into the actual incitement of violence – with calls to sink boats in the Channel and references to putting migrants in concentration camps. One user said: “round them up and send them back in the boats they came in.”

What’s even more terrifying is that these comments would previously have fallen into Meta’s category of hate speech, but it is now unclear where they lie. We have already seen unbridled hate speech on X, and its link with last summer’s violence is plain to see.

As these comments go unregulated, a new element of risk emerges for organisations like ours. The negative feedback loop of social media algorithms feeds extreme politics, and it has deadly real-life consequences.

Digital misinformation and disinformation spread exactly like a virus. Just as with viruses, there are superspreaders, and in the case of the riots they included the likes of Tommy Robinson, politicians and tabloid newspapers.

This information then mutates and changes as it is amplified by more people, becoming more potent and gaining more cut-through – which is aided by algorithms – creating “extreme information disorder.”

Social media companies and search engines that rely on engagement and impressions to boost advertising revenue are less likely to moderate “high engagement” content, even if that content is harmful. Because it generates revenue, they willingly turn a blind eye.

It’s clear that the business models of social media companies – at the very least – enable anti-migrant language to flow, and more often, inflame it. This ultimately leads to those with the loudest and most polarising views receiving the most digital “airtime.”
   
This is why we have seen more and more digital content which targets the communities JCWI serves. Since Elon Musk’s takeover of Twitter, this has spiralled exponentially.  

But this did not begin with social media. In the case of the August 2024 riots, X spread the message, but its origins lie sadly with a long history of anti-migrant and racist sentiment in Britain.

The message at its core is the same; it is just spreading faster. Far-right radicalisation is creating digital communities of hate, while disinformation spreaders prey on vulnerable people across the country.

Platforms must be controlled, but so must politicians, traditional media and other sources of disinformation and hate speech. Controlling hate speech and AI-generated fake anti-migrant content is where we ought to start.

Ofcom must hold to account social media companies that fail to moderate racist and harmful content, and there must be clear, severe penalties for allowing the spread of disinformation.

But this is not enough. Social media may have provided the spark, but the tinderbox was built by our politicians and mainstream media. This is why I told the inquiry that government attempts to tackle the social media problem must also account for these wider issues. This means making a choice between keeping the likes of Elon Musk and Mark Zuckerberg happy — or our communities safe.

And it means mainstream politicians acknowledging that it is their flirtations with populism — whether that be the “Go Home” vans of previous governments or the “smash the gangs” rhetoric of Keir Starmer — that have led us to the dark place we are in today.  

Ravishaan Rahel Muthiah is director of communications at the Joint Council for the Welfare of Immigrants.
