Artificial intelligence and financial fraud - defence and attack

read time: 5 mins
19.09.23

Artificial intelligence (AI) is a powerful tool ultimately designed to imitate human behaviour - and in most cases, financial fraud involves impersonation.

This means that AI has serious potential to be used for financial fraud, equipping fraudsters with additional, more sophisticated tools that make their attacks more convincing, efficient and fast.

AI can be used to impersonate people, automate phishing attacks and manipulate data. It is important to have strong security measures in place to ensure that AI is not used for fraudulent purposes and to protect against any potential risks. Additionally, it is crucial to have ethical guidelines in place that govern the use of AI, to prevent any misuse or abuse of the technology.

Regulation of AI is, of course, a current hot topic. The government published a white paper in March this year on what it proposes will be its “pro-innovation” approach to this, but AI is already everywhere. The white paper itself admits that “the pace of change itself can be unsettling.” In the meantime, self-help is the best (and only) defence.

Examples of how AI can be used in financial fraud

At the consumer end, AI can be used to generate scripts that can be read out over the telephone and used to scam people into making bank transfers. 

On a business level, and of more concern to lenders, banks and other financial firms, generative AI can create photographs and videos of people who do not exist. Most of us are already familiar with the term “deepfakes” and have seen deepfake videos that are alarmingly convincing. In other words, AI can create “evidence” to pass identity checks that could be used to effect transactions, open accounts and even demonstrate the existence of (fake) liquidity or assets against which borrowing can be secured.

So what can be done about it?

The risks posed by AI for financial fraud are staggering and extremely concerning. 

At the very least, firms who may be at risk should:

  1. Take additional steps to scrutinise the authenticity of all identifying documentation provided for anti-money laundering (AML) and know your customer (KYC) checks. Where possible, information should be sought from third parties, for example public registries, Companies House or verification firms, rather than taken directly from the customer. 
  2. Ensure that when pre-existing customers and clients are dealt with, continuing steps are taken to ensure that the person is not being “spoofed”. This could include, for example, a personal verbal password or multi-factor authentication. Ensuring a face-to-face meeting at the outset of the relationship, and nominating a client or customer manager to conduct that meeting and oversee the continuing relationship, would also be a sensible protective measure.  
  3. Train vulnerable staff on the patterns that can signify financial fraud. Transactions that are unexplained or out of character or borrowing that has no obvious purpose should all be suspected, no matter how convincing the supporting documentation may appear to be.
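The multi-factor authentication mentioned in step 2 is typically built on one-time passwords. As a minimal sketch (not production code), this is the standard HOTP algorithm from RFC 4226 - the building block behind many authenticator apps - using only the Python standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Generate an RFC 4226 one-time password from a shared secret and counter."""
    # Encode the counter as 8 big-endian bytes and sign it with HMAC-SHA1.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 test vector: shared secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # "755224" per the RFC appendix
```

In practice, the firm and the customer's device share the secret, so a matching code proves possession of the device - a second factor that a voice or video deepfake alone cannot supply.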

What are the benefits of using AI in fraud detection?

Whilst the risks posed by AI are extremely concerning, on the flip side it is becoming increasingly clear that AI may also be used for fraud detection and prevention. 

The key benefits include:

  • Enhanced accuracy: AI algorithms can analyse vast amounts of data and identify patterns and anomalies that are difficult for humans to detect. They can also learn from data and improve over time, increasing accuracy. Keeping pace with changing threats and growing volumes of fraud data without some form of AI is a heavy burden for analysts to bear, and human error combined with a rules-alone approach can produce high numbers of false positives, which only serve to damage the customer journey.
  • Real-time monitoring: With AI algorithms, organisations can monitor real-time transactions, allowing for immediate detection of anything that appears to be unusual and an immediate response to potential fraud attempts.
  • Increased efficiency: AI algorithms can automate repetitive tasks, such as reviewing transactions or verifying identities, reducing the need for manual intervention.
  • Reduced false positives: One of the challenges of fraud detection is the occurrence of false positives, where legitimate transactions are mistakenly flagged as fraudulent. The learning feature of AI algorithms reduces false positives.
  • Cost reduction: Fraudulent activities can have significant financial and reputational consequences for organisations. By reducing the number of fraudulent cases, AI algorithms can save organisations money and protect their reputation.
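As an illustration of the pattern-and-anomaly detection described above, here is a deliberately simple sketch that flags transaction amounts far from an account's norm using a robust, median-based score. Real systems use far richer features and learned models; this only shows the basic idea:

```python
import statistics

def flag_anomalies(amounts: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of amounts that deviate strongly from the median.

    Uses the modified z-score (median absolute deviation), which is not
    skewed by the very outliers it is trying to find.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:  # all values (almost) identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# A run of everyday payments followed by one out-of-character transfer.
print(flag_anomalies([25.0, 30.0, 22.0, 28.0, 26.0, 5000.0]))  # [5]
```

Run over a live transaction stream, a check like this is what enables the real-time monitoring and immediate response described above.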

There are, however, risks of using AI in fraud detection. These include:

  • False positive or false negative results: Automated systems can still produce both. A false positive means a legitimate transaction is incorrectly labelled as fraudulent, while a false negative means fraudulent activity goes undetected.
  • Lack of transparency: Certain AI algorithms can be difficult to interpret, making it challenging to understand why a particular transaction was labelled as potentially fraudulent. 
  • Biased algorithms: AI algorithms depend on training data which can be biased. If the training data contains biases, the algorithm may produce inaccurate results.

Explainable AI can help to mitigate some of these risks. The term “explainable AI” refers to the development of AI systems that can explain their decision-making in a way that can be understood by the average person. For example, where AI is used to assess loan risk, explainable AI will not only assess the risk but also explain the decision or outcome. 
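A toy sketch of what an “explainable” check might look like: every rule that contributes to the risk score also records a plain-English reason, so the decision can be justified to a customer or regulator. The thresholds, weights and field names here are invented purely for illustration:

```python
def score_transaction(txn: dict, account_avg: float) -> tuple[int, list[str]]:
    """Return a fraud risk score plus the human-readable reasons behind it."""
    score, reasons = 0, []
    # Each rule adds to the score AND explains itself.
    if txn["amount"] > 10 * account_avg:
        score += 50
        reasons.append(f"amount is {txn['amount'] / account_avg:.0f}x the account average")
    if txn["country"] not in txn["usual_countries"]:
        score += 30
        reasons.append(f"destination country {txn['country']} never used before")
    if txn["hour"] < 6:
        score += 20
        reasons.append("initiated outside the customer's normal hours")
    return score, reasons

txn = {"amount": 9500.0, "country": "XX", "usual_countries": {"GB"}, "hour": 3}
score, reasons = score_transaction(txn, account_avg=100.0)
print(score)    # 100
print(reasons)  # one plain-English reason per triggered rule
```

Modern explainable AI applies the same principle to learned models rather than hand-written rules, but the goal is identical: a decision that can be inspected and challenged.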

The future of AI and financial fraud

The use of AI in the fight against financial fraud isn’t new. The fight, though, has got a whole lot tougher since the COVID-19 pandemic, which drove considerable change in the world of digital transactions - and threat actors have taken note. Fraudulent activity is not only becoming more sophisticated but is increasing in scale and sheer numbers. It follows that the demand for effective AI solutions to help combat it is greater than ever. 

In a fast-paced digital world, it is clear that AI will be a key player in the future of financial fraud defence and attack. All but the fraudsters will of course be rooting for the defence side to prevail. There are already some indications that we will reach a point where suspected fraudulent behaviour is detected and prevented before it even occurs. That is the focus of AI’s next iteration - simulation modelling. That is also for another article, on another day!

For specialist legal advice about bringing or defending a civil financial fraud claim, please contact Cara White.

Find out more

If you are interested in how AI will change business and the law, visit and bookmark our Spotlight AI hub for more AI insights. The Hub brings together commentary from Ashfords’ experts, our clients and our contacts across a wide range of areas; looking at how AI might impact as its use evolves. 

Please do also get in touch if there are any specific areas related to AI that you’d be interested in hearing more about. 

