Why the Online Safety Bill doesn’t go far enough


Cambridge University law student Nathan Silver assesses the limitations of the new draft legislation

In the aftermath of the Euro 2020 final between England and Italy, three young Black footballers — Marcus Rashford, Jadon Sancho and Bukayo Saka — received appalling online racial abuse.

Unfortunately, this is a common occurrence. While online abuse has existed for as long as the internet itself, it appears to have intensified in recent years, with Black footballers often among the victims. This was the most high-profile incident yet, and it prompted a petition, now with over one million signatures, calling for fans who post racist material to be banned from attending games for life.

More recently, Love Island contestant Kaz Kamwi received racist abuse on Instagram when viewers disagreed with her decisions on the show. Her family, who controlled the account while Kamwi was in the villa, had to release a statement reminding people to be kind.

While the instigators of the abuse are of course to blame, social media platforms could do more. Their main strategy for combating hate crime seems to be the removal of racist content from their sites, with Facebook posting in the aftermath of the final that it “quickly removed comments and accounts directing abuse at England’s footballers”. Twitter removed over 1,000 posts and blocked accounts sharing hateful content within 24 hours. Even so, this is not always successful. Visiting one of the players’ pages shortly after the game revealed swathes of racist emojis and comments, despite the site’s best efforts.

The crux of the issue is the availability of anonymous accounts, which allow individuals to abuse others without fear of consequence. Most sites, such as Facebook and Instagram, not only allow the creation of anonymous accounts but have no mechanism to prevent an owner who has been removed from the site for racism from simply creating a new one. To sign up to Instagram, one need only provide an email address or phone number. Offenders who have been flagged and removed by a site can easily continue to abuse online through a different account, simply by creating a new email address or using a different number. Even if an offender’s IP address (the unique address that identifies a device on the internet or a local network) is blocked, they can easily change it and create a new account.

The UK government has recognised the need for online platforms to better regulate their sites. It published the Internet Safety Strategy Green Paper in October 2017, which aimed to “ensure Britain is the safest place in the world to be online”. This evolved into the Online Harms White Paper, before finding its final form in the draft Online Safety Bill (the bill). The bill is currently being scrutinised by a Joint Committee, which is required to report its findings by 10 December 2021, with the aim of the bill becoming law in 2023 or thereabouts.

The bill seeks to appoint Ofcom as an independent regulator of certain “regulated services”, meaning regulated “user-to-user” or “search” services. The regulator will impose duties of care on providers of regulated services, including, but not limited to, risk, safety, and record-keeping duties. Ofcom will have the power to fine companies up to £18 million or 10% of their annual turnover (whichever is higher), and even to block user access to their sites, if they fail to fulfil their duties.


The bill is ambitious, and the UK will become the first country to regulate social media platforms in this way should it become law. Oliver Dowden, secretary of state for digital, culture, media and sport, claims that the bill will “crackdown on racist abuse on social media”. But online racism appears to be an afterthought. The Online Harms White Paper was criticised in a recent article for covering a “disparate array of ills”. It aimed to tackle hate crime alongside child sexual exploitation and abuse, terrorism, the sale of illegal drugs and weapons, as well as content harmful to children and legal but harmful content. The White Paper presented a risk of racism being forgotten among other online harms. But at least it mentioned hate crime. The draft Online Safety Bill does not mention the words ‘race’, ‘racism’ or ‘hate crime’ at all.

Instead, racist abuse is subsumed under a general ‘duty to protect adults’, which requires platforms to specify how they will deal with harmful content. The bill also imposes, under its “reporting and redress duties”, a duty to provide easily accessible mechanisms for reporting content the platform considers harmful and for taking action against offenders. But these provisions are incredibly vague, failing to detail specifically how “huge volumes of racism, misogyny, anti-Semitism… will be addressed”. It is possible that, after the review by the Joint Committee, concrete and clear steps to combat racism will be published. But it is concerning that a bill which has been presented as a tool for tackling online racism fails to mention it at all.

The bill also fails to tackle the issue of anonymous accounts. Nicola Roberts, the former Girls Aloud star, who has herself suffered online abuse, refused to endorse the bill. Instead she criticised it, claiming that it had “failed to combat the problem of someone’s account being taken down only for them to start a new one under a different name”. And she is right; the bill fails to address the root of the problem. As Roberts puts it, the bill seeks to “chase the rabbit” rather than “fill the hole”.

As much as social media companies can better inform their users, improve the mechanisms for reporting abuse, and remove abusive messages more quickly, unless anonymous accounts are tackled head on, offenders remain able to abuse others without fear of consequence. To be effective, the bill must focus more on preventing racist abuse in the first place, rather than on better mechanisms for reporting accounts and removing comments after the event. Requiring users to sign up using ID would help with prevention. Individuals would face the real prospect of an employer, or perhaps parents (in the case of children), being contacted; a potential lifetime ban from the platform; or even criminal proceedings, rather than a slap on the wrist.

Some interested parties worry about the privacy issues, or the potential to target vulnerable people, including children, connected with any loss of anonymity. But it is worth noting that users would not be required to have their real names displayed online. Rather, the requirement could simply be that the social media platform has access to a user’s real name in the event of an offence, meaning worries about a lack of privacy or children’s safety online would be unfounded. There remain data protection issues associated with the handing over of personal details to a social media company, which require addressing. But this is a small price to pay to better protect users of these platforms from online abuse and racism.

The bill’s motives are clearly to be applauded. Ending the self-regulation of social media companies is certainly a step in the right direction towards making the internet a safer place. But if the bill is to seriously tackle the specific issue of online racism, it must highlight it, to ensure it does not become forgotten amid a sea of other aims, and it must commit to imposing real repercussions on offenders by ending anonymous accounts.

Nathan Silver is a second-year law student at Magdalene College, Cambridge.


Please bear in mind that the authors of many Legal Cheek Journal pieces are at the beginning of their career. We'd be grateful if you could keep your comments constructive.

3 Comments

Anon

I suppose the idea is that if a platform is failing to combat poor behaviour through conventional methods, the threat of regulatory action will cause that platform to adopt the kind of measures you suggest, without them being bluntly prescribed.

(4)(0)

Anonymous

Anonymous accounts aren’t the problem, indeed these facilitate freedom of speech.

The solution is to make the platforms responsible for content published on them.

(12)(1)

Anonymous

I’ve noticed an increase in commentary that focuses on these emotionally charged and politically opportune, yet generally insignificant, examples of poor behaviour to call for widespread limitations on the wider public… of course it is only to counter the aforementioned statistically insignificant bad behaviour and will not in any way be abused…

(14)(0)

Comments are closed.
