Considering professional negligence in the use of AI - whose fault is it anyway?

read time: 5 mins
21.03.24

It is becoming increasingly clear that AI will have a significant impact on the delivery of professional services. On some level, AI is already able to simulate human intelligence and imitate complex decision-making processes. Those who provide advice for a living, including accountants, independent financial advisors, surveyors, consultants and many others, should therefore expect a major shakeup in the way their services are delivered to clients in the not-so-distant future.

Within this developing landscape a key source of debate and speculation is professional liability. Whose fault is it if the AI system fails or comes up with the wrong answer and the client suffers loss as a result? In this article, we begin to explore this point and review the government white paper: ‘A pro-innovation approach to AI regulation’.

Effective engagement with AI

Professionals might use AI simply to answer questions, help with research or rationalise large volumes of data. Alternatively, they may seek to embrace this technology on a more fundamental level by asking AI to produce bespoke advice directly for their clients.

Effective engagement with AI within professional practice will soon become a necessity if professionals wish to compete. Given the potential of AI to reduce human error and enhance the quality of the output, two contentious topics in their own right, some argue that in time a professional might be negligent if they don’t suitably employ AI within their practice.

What does the ‘A pro-innovation approach to AI regulation’ white paper tell us?

It is likely that case law will develop around the tort of negligence to determine where the line should be drawn between the liability of the professional user and the liability of those developing or supplying the AI. The government's white paper, ‘A pro-innovation approach to AI regulation’, published on 29 March 2023, touches on the existing legal framework's suitability for addressing the emerging risks posed by AI technologies.

In the context of consumer law, the paper acknowledges that ‘it is not yet clear whether consumer rights law will provide the right level of protection in the context of products that include integrated AI or services based on AI, or how tort law may apply to fill any gap in consumer rights law protection’. Despite this acknowledgement, the government claims it is too early to make firm decisions about the apportionment of liability, considering the issue too complex. As the name of the paper suggests, the government is anxious not to legislate in a way which could hamper innovation.

For the time being at least, the government appears reluctant to impose new, overarching legislation governing the use of AI in the UK. Instead, it is exploring the establishment of cross-sectoral principles for the UK's existing regulators to interpret and apply within their remits. This approach will be advanced in conjunction with the creation of a ‘central function’ to support regulator co-ordination and knowledge exchange, and to avoid regulatory gaps or overlap.

The government is, however, contemplating more intervention in relation to the development of highly capable general-purpose AI systems, seeming to fear that the risks these systems pose may not be appropriately managed through the existing legal framework.

How can professionals protect themselves if AI goes wrong?

There appears to be an appreciation that those most capable of managing and mitigating the risk of these systems, the developers, could be left unaccountable when things go wrong, and that this could lead to unfair outcomes, including claims being brought against those using these systems to provide services to their clients. This concern is expressed in the government's response to the aforementioned white paper, published on 6 February 2024.

Despite the current uncertainty as to how the law will develop in the UK, professionals using AI today should take all reasonable steps to ensure it produces a reliable outcome. Professionals will need to train both the AI and the individuals using it, make sure the appropriate AI is used for the task in hand, and deploy a suitable quality control process. If ever scrutinised by a court, a professional would want to be able to justify their decisions in using, or continuing to use, the AI for the matter in question.

There is no doubt that keeping detailed records of their actions and decision-making processes, alongside their assessment of risk, would be sensible, particularly whilst a professional is embracing a new technology for the first time or is still becoming acquainted with it.

AI sometimes makes mistakes, and its creators simply don’t know why. This is an inherent quirk of many AI systems and is commonly referred to as the ‘black box problem’. In such cases, some commentators have argued there might be no liability at all, on the basis of a lack of reasonable foreseeability. Professionals have never been held to a standard of perfection, and neither should they be. However, from a practical perspective it will be interesting to see whether, in time, any form of no-fault or strict liability regime will develop and be imposed on professionals using AI, particularly where liability is underwritten by professional indemnity insurance.

What can professionals take away from this?

Professional codes of practice regarding the use of AI will soon be developed by various industry regulators, and it remains to be seen how these rules will affect the standards to which professionals are held when using this technology. The UK government has written to a number of regulators asking them to publish an update outlining their strategic approach to AI by 30 April 2024. Those providing any form of professional services should look out for future updates relevant to their sector, whilst keeping abreast of the government's plans in this rapidly developing area.

Although beyond the scope of this article, those looking to leverage AI for the provision of services within the EU should also seek to understand the implications of the Artificial Intelligence Act, anticipated to come into force in 2026. Whilst the act appears to apportion liability primarily to those developing AI systems, it does impose certain obligations on their users, or ‘deployers’. In certain cases, professionals may need to perform impact assessments on fundamental rights and/or ensure their clients are suitably notified that AI is being leveraged in the delivery of the services.

For more information, please contact Hugh Houlston.

Find out more

If you are interested in how AI will change business and the law, visit and bookmark our Spotlight AI hub for more AI insights. The Hub brings together commentary from Ashfords’ experts, our clients and our contacts across a wide range of areas, looking at how AI’s impact might develop as its use evolves.

Please do also get in touch if there are any specific areas related to AI that you’d be interested in hearing more about. 

