The Online Safety Bill – reported recommendations

Earlier this month, the House of Lords and House of Commons Joint Committee on the Online Safety Bill released a report, the ‘report’, detailing its discussions and recommended areas of improvement, available here.

The Online Safety Bill, the ‘draft Bill’, published in May 2021, aims to fundamentally reshape much of the UK’s online space in an attempt to combat online harms such as misinformation and fraud.

In this article, we discuss the immediate consequences service providers will likely face if the committee’s recommendations are adopted.

What services does the Bill apply to?

The Bill applies to providers of search services or user-to-user services, ‘Service Providers’, where those services:

  1. are available to individuals in the UK; and
  2. host or enable content that could reasonably be believed to present a material risk of significant harm to those individuals.

Some specific exemptions are listed, but are otherwise outside the scope of this article.

Service provider requirements

Design-level approach

Privacy by design should be a familiar phrase by now – the idea that the easiest way to protect personal data is by considering and accounting for it at the design phase of any project, website or service, to ensure appropriate protections are carried through to completion.

The report suggests a similar approach to online harms. This would include, for example, increased friction – i.e. a greater number of steps needed – to share posts on Facebook, in order to slow the spread of misinformation through viral media posts. The report suggests this single change could be as effective as the entire fact-checking ecosystem.

Other proposed design steps include not auto-playing content unless the user has opted in, and taking measures to ensure that algorithmic content recommendation does not funnel users into content ‘rabbit holes’.

The suggested measures would be included as minimums in a mandatory code of practice, and could in theory require a redesign of all affected services.

Pseudonymity, anonymity and account creation

Many services offer anonymous or pseudonymous accounts, where the user cannot be identified by either other users or by the service itself. The report indicates that such accounts lead to an increase in online abuse, and can hinder investigation of online activity. A number of requirements are proposed, including allowing users to set how they interact (or don’t) with anonymous/pseudonymous accounts, and making it more onerous to create new accounts to prevent use of disposable ‘troll’ accounts.

In addition, the report calls for the creation of a private-sector ID verification and compliance industry to allow services to adequately identify their users. The report does note that controls will be needed to protect users who may have legitimate reason to be concerned about disclosure of their identity, such as activists in repressive states, or LGBT users in jurisdictions where homosexuality may be illegal.

New policies

Among other recommendations, the report also advises that service providers be required to introduce an Online Safety Policy, to sit alongside their privacy and cookie policies and terms of use. The policy should:

  • Explain how content is promoted and recommended to users;
  • Remind users of the type of activity and content that can be illegal online; and
  • Provide users with advice on what to do if targeted by content that may be criminal and/or in breach of terms of use and other related guidelines.


Service providers potentially affected by the Bill should pay careful attention to its development, and start considering what changes they may need to make, including both introducing new policies and redesigning their services.

If you have any queries about the report or the Bill, or any other matters mentioned in this article, contact Suzie Miles, partner in Ashfords' Technology team, at
