How developments in AI are impacting claims fraud

  • Market Insight 26 January 2024
  • UK & Europe

  • Regulatory risk

How can you tell when someone is lying? Is it their body language that gives them away? Their eye movements; their gestures; the way in which they touch their face while speaking? Or is it the nature of the lie itself? Not enough detail or perhaps too much detail? An unwillingness to use the first-person pronoun ‘I’?

The truth is that, unless you’re a skilled practitioner, catching someone out in a lie is difficult. That’s something the insurance industry has learned the hard way. In 2022, the sector detected 72,600 fraudulent insurance claims, according to the Association of British Insurers. The actual number is likely much higher, as many go undetected.

Insurers have made various attempts to root out these fraudulent claims. Voice stress analysis was much heralded in the early 2000s – technology developed in Israel that listened to a claimant’s voice during a phone call to detect abnormally high levels of stress. Cognitive interviewing took a psychological approach – skilled call handlers interviewed claimants using a gentle but persistent form of questioning originally developed for use with children who were the victims of sexual abuse.

Now, with the rapid development of artificial intelligence (AI), a new approach is being pioneered. The key is the inconsistency of the liar. In short, telling a detailed lie about a car accident requires imagination, a good memory, the ability to act, a degree of confidence, intelligence and flexible thinking. Telling a lie once is easy; telling it over and over again is hard. Details will change. Terminology fluctuates. Aspects become over-embellished. It’s these changes and inconsistencies that AI can take advantage of.

Take, for example, a faked car accident involving a driver and three passengers. All four people must provide statements, complete claims and medical forms, and speak to physicians. The chances that all four alleged victims offer consistent statements and accounts over a period of several months are low. Human nature just isn’t like that. Our memories are not foolproof.

Those inconsistencies existed before AI; there’s nothing new about them. But what is new is AI’s ability to ingest large volumes of data, analyse it and then identify the inconsistencies. What would once have taken a claims fraud specialist perhaps an entire day to accomplish can now be achieved in minutes.
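To make the idea concrete, here is a minimal sketch of the kind of cross-statement comparison described above. All field names and values are invented for illustration; a real system would extract these facts from documents rather than receive them as tidy dictionaries.

```python
# Illustrative sketch only: a toy cross-statement consistency check.
# Claimant names, fields, and values are invented for demonstration.

def find_inconsistencies(statements):
    """Compare claimants' statements field by field and flag disagreements.

    `statements` maps a claimant to a dict of extracted facts. Returns
    {field: {claimant: value, ...}} for every field where the claimants
    do not all agree.
    """
    all_fields = set()
    for facts in statements.values():
        all_fields.update(facts)

    flagged = {}
    for field in all_fields:
        values = {who: facts.get(field) for who, facts in statements.items()}
        if len(set(values.values())) > 1:  # not everyone tells the same story
            flagged[field] = values
    return flagged

statements = {
    "driver":      {"time_of_accident": "14:30", "speed_mph": 30, "weather": "rain"},
    "passenger_1": {"time_of_accident": "14:30", "speed_mph": 30, "weather": "rain"},
    "passenger_2": {"time_of_accident": "15:00", "speed_mph": 45, "weather": "rain"},
}

for field, values in sorted(find_inconsistencies(statements).items()):
    print(f"{field}: {values}")
```

Here passenger_2’s account diverges on speed and timing, so those fields are flagged while the consistent weather detail is not.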

At Clyde & Co, we’ve been investing heavily in developing the capabilities of our Newton platform for several years. To identify fraud, the system must go through two key steps. 

First, it must be able to absorb and, perhaps most importantly, understand written materials – and this is where one of the big leaps has been made. Early on, the insurance sector focused on template-based optical character recognition (OCR). The system was taught that on form A, for example, the information contained in box B would always be a date or a name. 

That’s fine for a well-structured document but it can’t understand a set of free-form doctor’s notes. The newer approach we’ve adopted is holistic OCR – teaming the OCR with an AI in order to allow it to read and understand an entire page rather than simply the information in defined areas. This allows the AI to operate at a much higher level of comprehension.
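The contrast can be sketched in a few lines of Python. The form layout, labels and regular expression below are invented for illustration only – real OCR operates on page images, not strings – but the difference in approach is the same: the template extractor only knows pre-defined boxes, while the holistic reader scans the whole text.

```python
import re

# Illustrative contrast between template-based and free-form extraction.
# Form fields, notes, and the date pattern are invented for demonstration.

def template_extract(form_fields):
    """Template OCR: box B on form A is *always* the accident date."""
    return {"accident_date": form_fields["box_B"]}

def holistic_extract(page_text):
    """Holistic reading: find the date wherever it appears in free-form notes."""
    match = re.search(r"\b(\d{1,2} \w+ \d{4})\b", page_text)
    return {"accident_date": match.group(1) if match else None}

form = {"box_A": "John Smith", "box_B": "12 March 2023"}
notes = ("Patient attended A&E following a road traffic accident "
         "on 12 March 2023, complaining of whiplash.")

print(template_extract(form))   # works only because box B is pre-defined
print(holistic_extract(notes))  # works on unstructured doctor's notes
```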

Step two is the analysis of the data. We’ve integrated ChatGPT into our platform to do just that. Within seconds it can spot the inconsistencies and even present them in a table.
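As an illustration of that final step, here is a plain-Python sketch (with invented data) of how flagged discrepancies might be rendered as a side-by-side table; in the platform itself this presentation is produced by the integrated language model.

```python
# Illustrative sketch: once discrepancies between accounts have been
# extracted, they can be laid out as a simple table. All data is invented.

discrepancies = {
    "speed at impact":  {"driver": "30 mph", "passenger_1": "30 mph", "passenger_2": "45 mph"},
    "time of accident": {"driver": "14:30",  "passenger_1": "14:30",  "passenger_2": "15:00"},
}

claimants = ["driver", "passenger_1", "passenger_2"]

def draw_table(discrepancies, claimants):
    """Render one row per disputed detail, one column per claimant."""
    header = ["detail"] + claimants
    rows = [[field] + [accounts[c] for c in claimants]
            for field, accounts in sorted(discrepancies.items())]
    widths = [max(len(str(row[i])) for row in [header] + rows)
              for i in range(len(header))]
    lines = [" | ".join(str(cell).ljust(w) for cell, w in zip(row, widths))
             for row in [header] + rows]
    lines.insert(1, "-+-".join("-" * w for w in widths))  # header rule
    return "\n".join(lines)

print(draw_table(discrepancies, claimants))
```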

The importance of these developments is not lost on an insurance industry facing a potential rise in fraudulent claims due to tougher economic conditions. Every historic indicator points to the link between economic hardship and a growing volume of fraudulent insurance claims. But there are longer term wins, too.

Clearly, the holy grail would be predictive AI – the ability to identify fraudsters or fraud hot spots, or even a shift from one type of fraud to another. While an attractive idea, the ability to predict the future based on historical data and patterns of data is dubious. The reputational damage from getting such a prediction wrong would be very serious indeed.

A more realistic development is that, as insurers and law firms tap into the benefits of AI, so too will the fraudsters. If we use AI to check for inconsistencies, fraudsters may attempt to do the same, effectively checking their documentation before submitting it for scrutiny. We’ve seen a similar situation play out in the cyber security sector, where both attackers and defenders now use AI. Attackers can quickly and easily build malicious code using AI tools and then deploy it as ransomware on the web. Fraudsters have access to free-to-use AI tools like ChatGPT and Google’s Bard, and we may well see criminals on the dark web offering AI services to fraudsters.

One area of concern is the growth of so-called deepfakes and shallowfakes – the manipulation of media through the use of AI. Faked news footage, faked audio of a politician or a phone call to the police, faked dashcam footage, faked photographs. These techniques create realistic but fake audio, video or image content, which can be used to fabricate evidence in fraudulent claims. AI-driven solutions are being developed to identify these fakes by analysing inconsistencies in digital fingerprints, patterns and other anomalies that are imperceptible to the human eye. It’s our hope that over the next five years, AI will develop an increasingly sophisticated ability to detect and counter these deepfakes and shallowfakes.

Another win that could prove highly effective for the insurance industry would be AI’s ability to scan and analyse social media. Currently, human investigators spend hours trawling through social media sites like Facebook looking at claimants’ behaviour pre- and post-‘accident’ and seeing who their connections are. These investigations regularly throw up valuable clues about insurance fraud. Teaching an AI system to do the same task would not only speed up the investigation process but also reduce costs and broaden the scope of searches.
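As a toy illustration of what such automation might look like, the sketch below filters a claimant’s posts for physical activity mentioned after the claimed accident date. All posts, dates and keywords are invented; a real system would of course draw on far richer signals, including images and connections.

```python
import re
from datetime import date

# Toy sketch of automated social media screening. All data is invented.

accident_date = date(2023, 3, 12)          # claimed date of the injury
activity_keywords = {"gym", "skiing", "marathon", "football"}

posts = [
    {"date": date(2023, 3, 1),  "text": "Great day at the gym"},
    {"date": date(2023, 4, 2),  "text": "Completed my first marathon!"},
    {"date": date(2023, 4, 10), "text": "Quiet weekend at home"},
]

def flag_posts(posts):
    """Return posts made after the claimed accident that mention physical activity."""
    flagged = []
    for post in posts:
        words = set(re.findall(r"[a-z]+", post["text"].lower()))
        if post["date"] > accident_date and words & activity_keywords:
            flagged.append(post)
    return flagged

for post in flag_posts(posts):
    print(post["date"], "-", post["text"])
```

Only the post-accident marathon post is flagged: the gym visit pre-dates the claim, and the quiet weekend mentions no activity.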

We’re on the verge of an AI-driven revolution in fraud detection. For the insurance sector, this holds out the promise of massive financial savings and potentially a longer-term reduction in attempted frauds as criminals see less prospect of success. The challenge for insurers is not to be left behind as AI technology takes huge leaps forward. Just as important will be considering the ethical dimension of these advances. The move from identifying fraud to predicting fraud will be one of the biggest challenges for our industry in the next 10–20 years.

This article was originally published by Claims Magazine on 26 January 2024.
