Artificial intelligence (AI) is a powerful technology that excels at imitation, and most financial fraud involves impersonation.
AI therefore has serious potential to be used for financial fraud, equipping fraudsters with additional and more sophisticated tools that make their attacks more convincing, efficient and fast.
AI can be used to impersonate people, automate phishing attacks and manipulate data. Strong security measures are needed to protect against these risks, alongside ethical guidelines governing the use of AI to prevent misuse or abuse of the technology.
Regulation of AI is, of course, a current hot topic. The government published a white paper in March this year on what it proposes will be its “pro-innovation” approach to this, but AI is already everywhere. The white paper itself admits that “the pace of change itself can be unsettling.” In the meantime, self-help is the best (and only) defence.
At the consumer end, AI can be used to generate scripts that can be read out over the telephone and used to scam people into making bank transfers.
On a business level, and of more concern to lenders, banks and other financial firms, generative AI can create photographs and videos of people who do not exist. Most of us are already familiar with the term “deepfakes” and have seen deepfake videos that are alarmingly convincing. In other words, AI can create “evidence” to pass identity checks that could be used to effect transactions, open accounts and even demonstrate the existence of (fake) liquidity or assets against which borrowing can be secured.
The risks posed by AI for financial fraud are staggering and extremely concerning.
At the very least, firms that may be at risk should:
Whilst the risks posed by AI are extremely concerning, on the flipside it is becoming increasingly clear that AI can also be used for fraud detection and prevention.
The key benefits include:
There are, however, risks of using AI in fraud detection. These include:
Explainable AI can help to mitigate some of these risk factors. The term “explainable AI” refers to the development of AI systems that can explain their decision-making processes in a way the average person can understand. For example, where AI is used to assess loan risk, an explainable system will not only assess the risk but also explain how it reached its decision.
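To make the loan-risk example concrete, here is a minimal, purely hypothetical sketch of what an “explainable” score might look like. The feature names and weights are invented for illustration, and real systems are of course far more sophisticated; the point is simply that the output includes not just a score but each factor’s contribution to it.

```python
# Toy "explainable" loan-risk score. All feature names and
# weights below are invented purely for illustration.
FEATURE_WEIGHTS = {
    "debt_to_income": 0.5,     # higher ratio -> higher risk
    "missed_payments": 0.3,    # missed payments -> higher risk
    "account_age_years": -0.2, # longer history -> lower risk
}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return a risk score plus each feature's contribution to it."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

risk, why = score_with_explanation(
    {"debt_to_income": 0.8, "missed_payments": 2, "account_age_years": 5}
)
# 'why' shows which factors drove the score - for instance, that the
# two missed payments added more risk than the account age took away.
```

A human reviewer (or a customer challenging a decision) can then see why the system reached the outcome it did, rather than being presented with a bare number from a black box.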
The use of AI in the fight against financial fraud isn’t new. The fight, though, has got a whole lot tougher since the COVID-19 pandemic, which drove considerable change in the world of digital transactions, and threat actors have taken note. Fraudulent activity is not only becoming more sophisticated but is also increasing in scale. It follows that the demand for effective AI solutions to help combat it is greater than ever.
In a fast-paced digital world, it is clear that AI will be a key player in the future of financial fraud defence and attack. All but the fraudsters will of course be rooting for the defence side to prevail. There are already some indications that we will reach a point where suspected fraudulent behaviour is detected and prevented before it even occurs. That is the focus of AI’s next iteration - simulation modelling. That is also for another article, on another day!
If you are interested in how AI will change business and the law, visit and bookmark our Spotlight AI hub for more AI insights. The Hub brings together commentary from Ashfords’ experts, our clients and our contacts across a wide range of areas; looking at how AI might impact as its use evolves.
Please do also get in touch if there are any specific areas related to AI that you’d be interested in hearing more about.