Can ChatGPT be used meaningfully in a society that puts privacy first?

read time: 6 mins

Generative artificial intelligence (AI) models, such as OpenAI’s ChatGPT, are a hot topic. Generative AI is AI which can generate content (for example text, images or audio) in response to an input, or prompt, from the user. These models learn to do this from vast volumes of input data, commonly referred to as training data, as it is this data which trains the AI model.

Utilising AI has clear benefits, for example increased efficiency and cost savings. However, there are also wide-ranging concerns about the negative impacts for individuals and society as a whole. These include privacy, confidentiality and intellectual property infringement risks, as well as ethical concerns arising from AI biases and the economic harm of job roles being replaced. Not to mention the warnings issued by the likes of Google’s Sundar Pichai and Geoffrey Hinton (who recently left Google) regarding the dangers of AI, given the speed at which it is moving and the possibility that AI models could one day become more intelligent than us.

While businesses need to be alive to the wider considerations mentioned above, in this article we focus specifically on the privacy considerations for those looking to utilise generative AI tools such as ChatGPT.

What are the privacy concerns with ChatGPT?

On 31 March 2023, the Italian Data Protection Authority, the Garante, issued a temporary ban on ChatGPT. Its key concerns were that insufficient information had been provided to individuals about OpenAI’s use of their personal data, that OpenAI had no lawful basis for using vast volumes of personal data to train ChatGPT, and that ChatGPT could produce inaccurate information about people.

A month later, the temporary ban was lifted after OpenAI co-operated with the Garante and responded to its objections. In particular, OpenAI improved its privacy policy, enabled data subjects to request deletion of information they consider to be inaccurate and confirmed that data subjects can opt out of their information being used to train the AI model by completing an online form.

Whilst these measures go some way towards improving the protection afforded to data subjects, they have not resolved wider privacy concerns surrounding generative AI. Generative AI models have been trained on publicly available data, including personal data which is likely to have been collected unlawfully, and it is therefore no surprise that these tools will be subject to extensive regulatory scrutiny.

Other European privacy regulators have stopped short of implementing bans on ChatGPT but have requested further information from OpenAI. This therefore won’t be the last that we hear about ChatGPT’s privacy compliance. The European Data Protection Board has also set up a task force to co-ordinate enforcement action in respect of ChatGPT across European privacy regulators.

Meanwhile, in the UK, the Information Commissioner’s Office (ICO) has issued guidance regarding responsible use of generative AI which aligns with the concerns of European privacy regulators. The ICO has confirmed that it will be asking questions of organisations that are developing or using generative AI and that it will take action where organisations are not following the law or considering the impact on individuals.

What do businesses need to think about when utilising generative AI?

There is no AI-specific privacy regime. Businesses simply need to apply existing data protection obligations to AI-based data processing; this is what the ICO expects to see when organisations are either developing or using generative AI. Key considerations include:

  • If opting to utilise OpenAI tools, ensure that you are properly signed up to OpenAI’s business service offerings. OpenAI provides commercial APIs, which are distinct from its publicly available consumer version of ChatGPT. Businesses are required to complete an online form to request a copy of OpenAI’s Data Processing Agreement for its business service offerings. It is important to execute this prior to using OpenAI’s commercial services, in the same way that data processing provisions need to be put in place with any other data processor. Additionally, OpenAI’s Terms of Use specify that it does not use content that users provide to or receive from its API to develop or improve its services. This is an important distinction from consumer ChatGPT, where inputs can be viewed by OpenAI personnel for the purposes of developing the AI model.
  • Implement policies and procedures to govern the type of data that personnel can input into the AI model. Prohibiting personnel from inputting personal data into the AI model is an immediate way to manage the data protection risks. Similarly, prohibiting personnel from inputting any confidential or proprietary information will mitigate the risk of your confidential information being leaked or your intellectual property being disclosed. However, businesses will also need to consider how restricting inputs in this way may bias any AI output.
  • Ensure that your chosen AI model does not contravene your security accreditations. Many companies are prohibiting their workforce from using ChatGPT and similar generative AI models on the basis that, whilst these tools remain in beta, using them for company business would contravene security accreditations. This will also be an important consideration for businesses which are working towards certain security accreditations.
  • Ensure that the AI model allows you to comply with data deletion obligations. Data deletion functionality is crucial in order to fulfil data deletion requests, comply with the UK GDPR storage limitation principle and comply with obligations to delete all personal data at the end of services you are providing as a processor or sub-processor to another entity. The inability to delete items from AI chat history is a compliance concern that many critics have raised, although it is worth noting that, as part of its response to the Garante’s temporary ban, OpenAI has recently introduced mechanisms enabling data subjects to request deletion of information they consider inaccurate.
  • Complete a data protection impact assessment (DPIA). DPIAs are required for high-risk processing activities, and ICO guidance is clear that they are important when rolling out AI technology, in order to assess and mitigate the data protection risks involved. If you are setting parameters to prohibit personnel from inputting personal data into the AI system, this will be an important mitigation within your DPIA.
  • Ensure that you comply with transparency obligations. It is important to provide clear and transparent information about the use of personal data in connection with AI.
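The policies described above can be reinforced with technical controls. As a rough illustration of the kind of pre-submission check a business might build, the Python sketch below screens a prompt for a few common personal data patterns (an email address, a UK phone number, a National Insurance number) before it is passed to any generative AI API. The function names and patterns here are hypothetical and illustrative only; a real deployment would rely on a dedicated PII-detection tool and a far broader set of rules.

```python
import re

# Illustrative patterns only (hypothetical examples, not exhaustive).
# A production system would use a dedicated PII-detection library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns detected in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """A prompt is cleared for submission only if no PII pattern matches."""
    return not screen_prompt(prompt)
```

A check like this could sit in front of whichever AI service the business has adopted, blocking or flagging prompts before they leave the organisation, and would also provide useful evidence of mitigation within a DPIA.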

It is easy to see why there is significant pressure for meaningful AI regulation. Lawmakers have struggled to keep pace with rapid developments in the AI sector; however, an AI regulatory framework is on the horizon in both the UK and the EU.

For now though, where privacy is concerned, businesses need to focus on ensuring that their use of AI tools complies with applicable data protection laws, whilst also keeping abreast of the outcomes of privacy regulator investigations into generative AI tools such as ChatGPT.

For more information on the article above contact Hannah Pettit or Suzie Miles.
