30.1 Online harms
The Online Safety Act 2023, which came into force on 31 January 2024, operates by imposing various duties of care on social media platforms and commercial website operators with regard to their content. The Act will be implemented over three years from October 2023, with some measures – including age verification for pornography – needing to come in within 18 months. The aim of the Act is to make social media companies more accountable for illegal and harmful content seen on their platforms and to give parents and children clear and accessible ways to report problems online. Companies will also be required to prevent children from accessing harmful content by enforcing strict measures, including age limits. Those in scope will have to put in place systems and processes to improve user safety. The rules differ for each type of service, and some services will have more duties to meet than others.
As well as UK service providers, the Act applies to providers of regulated services based outside the UK where they fall within the scope of the Act, for example because such services target the UK or have a significant number of UK users. Companies in scope will be categorised as “Category 1”, “Category 2A” or “Category 2B” services, with Category 1 covering the largest platforms with the most users and carrying the most onerous obligations.
As well as the duties which apply to all in-scope services under the Act, Category 1, 2A or 2B services will be required to comply with additional duties (depending on which category they fall into), which are designed to improve user empowerment, ensure protections for news publisher and journalistic content (see below), prevent fraudulent advertising, and increase transparency. Ofcom are currently consulting, via a call for evidence, on draft codes and guidance, which will follow in 2025.
Ofcom have also published their advice to Government on the thresholds for the categorised services. They have recommended that the thresholds for all three categories be set by reference to user numbers, but that the thresholds for Categories 1 and 2B should also take account of a service's functionalities: in the case of Category 1, whether the service uses content recommender systems and/or allows users to forward or reshare user-generated content; and, in the case of Category 2B, whether it allows users to send direct messages. These functionalities mirror those Ofcom identified as being particularly high-risk in their own illegal content risk assessment.
The Act requires social media platforms to remove illegal content, stopping both children and adults from seeing it. Illegal content covered by the Act includes:
- child sexual abuse and exploitation
- extreme sexual violence
- fraud
- hate crime
- promoting or facilitating suicide
- revenge porn
- selling illegal drugs or weapons
- terrorism
The Act also creates new offences, including:
- a false communications offence, aimed at protecting individuals from communications where the sender intends to cause harm by sending something they know to be false.
- a threatening communications offence, to capture communications which convey a threat of serious harm, such as grievous bodily harm or rape.
- an offence of sending flashing images, aimed at stopping ‘epilepsy trolling’.
- an offence of assisting or encouraging self-harm online.
As far as adult users are concerned:
- All in-scope services will need to put in place measures to prevent their services being used for illegal activity and to remove illegal content when it does appear.
- Category 1 services (the largest and highest-risk services) must remove content that is banned by their own terms and conditions.
- Category 1 services must also empower their adult users with tools that give them greater control over the content that they see and who they engage with.
As far as children and young people are concerned, the Act makes social media companies legally responsible for keeping them safe online. Social media platforms must:
- remove illegal content quickly or prevent it from appearing in the first place. This includes removing content promoting self-harm
- prevent children from accessing harmful and age-inappropriate content
- enforce age limits and use age verification, assurance and checking measures to ensure that children cannot access services not designed for them
- ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments
- provide parents and children with clear and accessible ways to report problems online when they do arise.
Ofcom are given extensive enforcement powers, including:
- being able to require companies not meeting their obligations to put things right, to impose fines of up to £18 million or 10% of global annual turnover (whichever is higher), or to apply to the courts for business disruption measures (including blocking non-compliant services);
- being able to bring criminal sanctions against senior managers who fail to ensure their company complies with Ofcom’s information requests, or who deliberately destroy or withhold information, or against executives whose companies do not comply with child protection rules under the Act.
In March 2024, Nicholas Hawkes became the first person in England to be convicted of cyberflashing, an offence created by the 2023 Act, and was sentenced at Southend Crown Court to 66 weeks in prison. He had sent an indecent photo of himself to a woman in her 60s via WhatsApp and to a child via iMessage. The court also imposed a restraining order protecting the victims and a sexual harm prevention order banning him from approaching women he does not know in public for 15 years.
Safeguards for journalistic content
The larger platforms and commercial websites are obliged to safeguard journalistic content which is shared on their platforms. They will need to have systems in place to ensure they consider the importance of the free expression of journalistic content when operating their services. This means they will have to create a policy which balances the importance of the free expression of journalistic content, backed by a new ‘temporary must carry’ provision protecting such content, against other objectives which might otherwise lead to it being moderated or removed. Among other measures, companies will need to create expedited routes of appeal for journalists, so that they can submit appeals before their content is removed or moderated.
News publisher content on their own websites is the subject of an express exemption in section 56 of the Act, which exempts content produced by ‘recognised news publishers’, including the BBC and other news organisations, from Ofcom regulation. To qualify, the material must be “news-related material” (which includes news or information about current affairs, opinion about matters relating to the news or current affairs, or gossip about celebrities, other public figures or other persons in the news), and the organisation must have a ‘standards code’, a policy and procedures for handling complaints, and a registered office or other business address in the United Kingdom. This means that belonging to a regulator and/or having an ethical code, as discussed in Chapters 2 and 3 of McNae’s, will be particularly important. During the passage of the Act, the Government published a guidance factsheet (see useful websites) on enhanced protections for journalism within the Bill.
Useful websites and resources
A guide to the Online Safety Bill, updated August 2023
Fact sheet on enhanced protections for journalism within the Online Safety Bill, published 23 August 2022