THERE has been considerable pressure on the government to get the Online Safety Bill into a state where it can be passed into law. But muddled thinking and inconsistency have dogged the Bill from the start, with the latest amendment on self-harm causing consternation among mental health activists, service users, survivors and professionals.
What began as a well-intentioned campaign to force tech firms to act more responsibly over content, algorithms and age verification is now weighted towards the responsibility (and potential criminalisation) of individual users, who may turn to these platforms to express their need for help or the extent of their distress.
The new amendment makes content that encourages someone to harm themselves illegal. This update is designed to bring self-harm content in line with the criminal offence that already exists in encouraging someone else to take their own life.
Tech platforms would be required to remove self-harm content where the intention of the individual posting it is to encourage others to self-harm, and anyone posting such content could face prosecution.
There are two key problems with this.
First, determining the intentions behind a post will be all but impossible for tech platforms, risking an excessive blanket ban on anything that mentions self-harm. Second, the vague definitions of what counts as harmful or encouraging may result in prosecutions where the intention was not to cause harm but to give or seek support.
This concerns mental health activist “LP,” who fears such measures could criminalise online survivor support, as well as survivors who contact professionals online.
The discussion surrounding this Bill has directly linked self-harm to suicide, but as LP said: “Many types of self-harm are a coping mechanism for those who are in mental distress but want to try to live and avoid suicide.
“If an individual wants help with their feelings and self-harm but can’t access support in the community, it is only natural that they may ask for help online, either from peers on forums or directly from professionals. It is these people, and those who help them, who potentially face harm from this Bill.”
With the new Bill focused on behaviour in private places online, the result may be that people feel there is nowhere left to go. This comes at a time when statutory and voluntary mental health services are swamped and often unable to respond to increased need in a timely or effective manner.
LP is also concerned about the very premise of what this amendment intends to remove: “In my activism on self-harm support sites, I’ve never observed goading or direct encouragement to self-harm or die by suicide. Experienced, older activists would never allow that.
“I don’t perceive it as an issue requiring legislation as it could criminalise survivors sharing essential harm-minimisation support. So, I fear this Bill will not make anything safer but drive crucial and life-saving conversation underground or silence it completely.”
That brings us to the core of the issue. If you listen to the debate surrounding this Bill, action against the glamorisation of suicide and self-harm, and the prosecution of those who goad others into taking their lives, appear to be laudable attempts to make platforms safer. But the real issues are more nuanced than the government would have us believe, and this Bill misses an opportunity to address them fully.
Professor of liaison psychiatry Allan House said that concentrating only on harmful content misses other targets for potentially helpful interventions. One such target is the amount of time someone spends online.
“Not surprisingly, unhappy young people spend more time online,” House said. “It’s a two-way street and there is some evidence that time spent online is also a risk for developing depressive symptoms.
“The issue here is that algorithmic pushing is designed to keep you online and it may be that, in the context of self-harm material, time spent online is as important as what you’re doing or looking at online.
“This is something that is important and should be easy to measure, but the tech companies are likely to be reluctant because their business model is dependent on keeping individuals online.”
House thinks that the preoccupation with what people are looking at leads to insufficient attention being paid to this question of how they are using social media.
He draws a parallel to other situations where at-risk people can be identified by patterns of behaviour such as spending long periods online or what looks like ill-considered doubling up on bets in an attempt to recoup losses.
House also shares concerns about the lack of clarity in the definition of terms like “harmful” and “encouraging,” which has already led to the banning of all self-harm images (even cartoons) on Instagram, with little consideration of the effect of such a move.
The lack of clarity regarding definitions means much of the heavy lifting to get this imperfect Bill into a workable shape will be outsourced to Ofcom and the courts.
House believes this failure to consider wider evidence about potential harms online raises the possibility of clumsy legislation. The result could be a law that does more harm than good: blocking online content that many isolated and unhappy people find helpful, while failing to tackle the areas where social media use genuinely needs better regulation.
Ruth F Hunt is an author and freelance journalist and Allan House is an emeritus professor in liaison psychiatry at Leeds University’s School of Medicine.
