Assessing the impact of the Clearview AI decision - how clear is the future of the UK’s AI data protection law?

read time: 6 mins
02.04.24

In October 2023, the UK’s First-tier Tribunal (General Regulatory Chamber) allowed an appeal by Clearview AI, an American facial recognition company, against a decision of the Information Commissioner’s Office (ICO).

The ICO’s decision was to fine the Manhattan-based company £7,552,800 and issue an enforcement notice requiring Clearview to stop obtaining and using the personal data of UK residents and to delete their data from its systems. The ICO is now seeking permission to appeal the First-tier Tribunal’s rejection of its initial decision.

Whilst this latest appeal will turn on a relatively narrow point of law that won’t apply to most AI companies, the wider case has potentially very serious implications. In this article, we explore the Clearview case in detail and provide insight into the decision’s impact on AI companies. 

Why did the ICO take action against Clearview?

Clearview is a search engine for faces. A user uploads an image of a person’s face and Clearview’s AI system searches through over 20 billion images to find matches. It gathers these images from publicly available online sources, although Clearview states that data is not taken from social media accounts set to private. The size of Clearview’s dataset means that it likely held a significant amount of data relating to UK residents, obtained without their knowledge or consent.

Clearview is astonishingly powerful, able to detect faces in complex crowds with remarkable accuracy. It also has tangible benefits, currently being used to help identify missing people and casualties of the war in Ukraine.

Whilst Clearview has only taken on law enforcement clients since a US privacy settlement in 2020, the ICO held Clearview’s use of UK residents’ personal data to be unacceptable. It therefore fined Clearview and issued the enforcement notice, citing multiple breaches of UK GDPR. These included, amongst others, a failure to obtain consent, a failure to have a lawful basis for processing and a failure to prevent data being retained indefinitely.

Clearview successfully appealed the ICO’s enforcement notice and fine on the basis that it now only works for security and law enforcement clients outside the UK and EU and does not, and did not, operate a base in the UK. Whilst the tribunal held that Clearview’s activities did constitute the processing of personal data under UK GDPR, it also held that those activities were outside the scope of the EU GDPR in force at the time of the processing (and therefore also fall outside the newer UK GDPR). This meant that the ICO did not have jurisdiction to issue Clearview with a fine and enforcement notice.

What is the impact of this decision on AI companies?  

To start with, the ICO has not taken the decision lying down and is currently seeking permission to appeal the tribunal’s decision. The ICO’s position is that Clearview itself was not processing personal data for law enforcement purposes; it was doing so to provide commercial services, albeit services sold only to law enforcement and security agencies. The key question for this latest appeal, therefore, is whether, as the ICO puts it, ‘commercial enterprises profiting from processing digital images of UK people, are entitled to claim that they are engaged in ‘law enforcement’’.

The ICO itself acknowledges that, as a matter of international law, data processing by a foreign government would not fall within the scope of the regulations, as one state cannot seek to bind another.

However, it is this latter point that narrowed the scope of the original appeal. Clearview was scraping (the technical term for automated data extraction) UK citizens’ personal data for the purposes of foreign law enforcement before the date on which the UK fully left the EU, known as IP completion day. This meant EU GDPR was still in place. EU GDPR did not apply to the processing of personal data in the course of an activity falling outside the scope of EU law, which processing by foreign governments does. That version of the GDPR therefore did not apply to Clearview’s activities.

Positively for the ICO, whilst the tribunal in the initial appeal held that Clearview’s work fell outside the old EU law, and that the ICO was therefore unable to bind it to its decisions, it stated that Clearview’s data processing would now be captured by the newer UK GDPR. The tribunal confirmed that the monitoring of data subjects by international companies is covered by UK GDPR as long as it relates to monitoring of the subjects’ behaviour and the subjects are inside the UK. The location of those carrying out the monitoring is irrelevant in these circumstances.

It is also interesting that, whilst UK GDPR focuses on the ‘behaviours’ of data subjects, the tribunal highlighted that images of people engaged in an activity, such as playing sport or drinking, constituted behaviour monitoring. Similarly, capturing only a single instance of a behaviour is enough to constitute monitoring. This is a broad analysis by the tribunal and will likely make it easier for the ICO to impose fines on similar generative AI companies whose models are trained on publicly available data.

This is also positive for the ICO, as the tribunal drew an inference that Clearview would have been trained on UK personal data, given that it takes data from the internet and social media is used widely in the UK. Considering the rise of AI image generation tools such as OpenAI’s DALL-E 2, it is possible such applications will be targets for the ICO in the future if their training data is not expressly restricted to tight parameters. Indeed, the ICO has repeatedly advised that firms should consider data protection at the outset of designing an AI tool so that relevant protections are built in.

What can AI companies learn from this?

Effectively, by having no base in the UK and a tight client focus on law enforcement, Clearview has for now avoided the ICO’s fine. It is unlikely that comparable generative AI systems will be able to do the same. Considering the initial appeal decision, it will be even more important for generative AI companies to consider their lawful grounds for data processing, especially when it comes to biometric data such as facial recognition. Any failure to do so, such as failing to ensure that there is a lawful basis for processing the personal data in question or retaining data indefinitely, could result in a substantial fine or an enforcement notice from the ICO.

It will be telling to see whether the ICO is granted permission for its appeal and whether this results in any further clarification on the points raised by the First-tier Tribunal.

If you require further advice on the data protection considerations around AI, please contact our data protection team.

Find out more

If you are interested in how AI will change business and the law, visit and bookmark our Spotlight AI hub for more AI insights. The hub brings together commentary from Ashfords’ experts, our clients and our contacts across a wide range of areas, looking at how AI might affect them as its use evolves.

Please do also get in touch if there are any specific areas related to AI that you’d be interested in hearing more about. 

Visit our AI spotlight area
