According to the Association of British Insurers (ABI), detected insurance fraud costs insurers £1 billion annually. However, the real figure is likely higher, as plenty of fraud still goes undetected.
I recently took a deep dive into how we at Confused.com work to identify and prevent fraud. But in this article, I want to look specifically at artificial intelligence (AI) and the threats it poses to the insurance industry.
AI uses large sets of data to perform tasks we previously thought only humans could do. That includes analysing data, writing text, creating realistic images, and speeding up repetitive tasks.
In this guide, we'll look at how fraudsters are using AI to make their crimes more sophisticated - and how we're using it to tackle fraud ourselves.
How is AI changing the way fraudsters work?
The most common kind of fraud we tackle at Confused.com is organised application fraud, also known as ghost broking.
This involves fraudsters selling fake insurance policies to their 'customers'. Typically, they advertise via social media, targeting vulnerable people - for example, people whose first language isn't English, older people, or those who struggle to afford insurance.
Fraudsters then purchase a policy from a legitimate insurer on the 'customer's' behalf, often using stolen credit cards and IDs. This way, the stolen card pays for the policy, and the fraudster pockets the cash handed over by the 'policyholder'.
We've seen fraudsters use AI and machine learning at multiple stages of this scam, in 3 main ways:
1. Fraudsters are using AI to trick consumers and insurers into thinking they're legitimate
Fraudsters use AI and related technologies to make their advertisements more believable, including both visuals and copy.
For instance, ChatGPT - perhaps the most well-known generative AI tool - can quickly produce legitimate-looking text for fake websites. Similarly, DALL-E, an image-generation tool, can create convincing images from text prompts.
Fraudsters use AI tools like these to set up multiple fake ads and websites quickly. For example, we've recently seen a fraudulent website modelled on a legitimate brand that looked convincing to the untrained eye. A few signs told us the site was fake, including the lack of an FCA registration number at the bottom of the page.
In another recent case, a fraudster showed the 'customer' a video of them entering the customer's details into a price comparison website (PCW) and the corresponding quotes appearing. The video looked legitimate, but the fraudster had seamlessly swapped out the real quote for one showing a premium for an entirely different customer profile.
Video editing tools that make this possible are already widespread, but AI is making them even more accessible.
At the same time, experts have expressed concern about the rise of 'deepfakes' - AI-generated images that look real but are completely fabricated. As well as helping ghost brokers produce more convincing advertisements, these could let fraudsters create fake images of car accident damage to support fraudulent claims.
2. Fraudsters are using AI to access and use more customer data more quickly
10 years ago, fraudsters had to type fake information into each application manually, which limited the number of applications they could submit.
Now, with AI, fraudsters can automate those submissions, using tools that pull customer data from spreadsheets to autofill the forms.
This enables fraudsters to increase the scale of their operations, meaning there's a lot more fraud for us to tackle.
At the same time, fraudsters can access and submit a broader range of fraudulent information. They're no longer reusing the same email or physical address in each application.
Instead, they're drawing on stolen customer data from a wider range of victims, so they don't have to repeat the same address or any other data point. This information often comes from increasingly sophisticated phishing scams, many of them written with ChatGPT to trick victims into handing over their data.
In other cases, fraudsters are generating the data themselves. For instance, some are creating 'synthetic' IDs from scratch - sets of data that look authentic but don't actually correspond to real people.
3. Fraudsters are using technology to bypass security triggers
The varied data that fraudsters use poses a huge challenge for the security systems of PCWs and insurance companies. These systems typically look for anomalies in data, so the more varied the data, the more legitimate it often looks.
Fraudsters also use AI to make their data look more sophisticated. For example, when making a fraudulent application, they can use AI to mask the device they're using, which makes the application much harder to trace.
IP addresses are one data point we monitor to assess whether an application is legitimate. A few applications that use the same IP address are not usually suspicious, as they could be from flatmates or family members.
However, we need to investigate if thousands of applications come from the same IP address. Of course, that address could belong to a coffee shop or public wi-fi hotspot, but it could also be a fraudster making many applications from the same place.
Now, though, fraudsters are using digital tools that quickly switch between IP addresses or change their device IDs. This means 2 applications can look as though they were submitted on 2 different devices in 2 different places, even when they came from the same fraudster.
We have ways around this, too. For instance, we can track other signals - such as screen resolutions or device languages - to understand what device an applicant is really using. Seeing the same unusual combination of settings appear over and over again is a red flag.
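To make this concrete, here's a simplified Python sketch of the kind of counting such a check involves. The field names, example values, and thresholds are purely illustrative assumptions - they aren't our actual data model or rules.

```python
from collections import Counter

# Hypothetical application records - the fields and values are illustrative only.
applications = [
    {"ip": "203.0.113.7", "screen": "1366x768", "language": "en-GB"},
    {"ip": "198.51.100.2", "screen": "2360x1284", "language": "en-GB"},
    {"ip": "203.0.113.7", "screen": "1366x768", "language": "en-GB"},
    # ...thousands more in practice
]

IP_THRESHOLD = 1000          # illustrative: a volume shared wi-fi can't easily explain
FINGERPRINT_THRESHOLD = 500  # illustrative: the same device traits recurring too often

def flag_suspicious(apps):
    """Count applications per IP address and per device fingerprint
    (screen resolution + language), and flag anything that appears far
    more often than flatmates, families, or a coffee shop would explain."""
    by_ip = Counter(app["ip"] for app in apps)
    by_fingerprint = Counter((app["screen"], app["language"]) for app in apps)

    suspicious_ips = [ip for ip, count in by_ip.items() if count >= IP_THRESHOLD]
    suspicious_fingerprints = [fp for fp, count in by_fingerprint.items()
                               if count >= FINGERPRINT_THRESHOLD]
    return suspicious_ips, suspicious_fingerprints
```

In practice, thresholds like these would need tuning against real traffic, since busy shared networks can legitimately generate plenty of applications.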
Ultimately, AI is moving quickly, making fraud more sophisticated. For the insurance industry to tackle it, we need to work together.
How might the insurance industry use AI to help prevent fraud?
Fortunately, AI is not just a tool used by fraudsters. Insurers and PCWs also use AI algorithms and advanced predictive analytics to fight back.
Wherever there is data, there are usually trends and anomalies - even if that data is gathered or generated by AI. We've been using analytics to identify trends in application data for years, and AI is making those analytics even more sophisticated and accurate.
When looking for fraudulent applications, it helps to start with the ones that are legitimate. By first mapping what genuine customers look like, insurers can more easily spot the applications that don't fit.
For instance, we once saw an unusually high number of harbourmasters applying for insurance. That becomes suspicious when they're all based in the Midlands, which is nowhere near the sea.
Insurers can also map new data to previous interactions they've had with customers. If a returning customer is applying with data that's different to prior applications, they could be a victim of identity theft. For example, if a customer used to be a taxi driver doing 20,000 miles a year and now they're a surgeon, it may be suspicious.
AI and advanced analytics could make this much easier for insurers to identify and flag.
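As a rough illustration of that kind of check, the sketch below compares a returning customer's new details with their previous application and flags big changes. The fields, threshold, and example values are assumptions made for illustration, not a real insurer's risk rules.

```python
# Hypothetical records for a returning customer - illustrative only.
previous = {"occupation": "taxi driver", "annual_miles": 20000, "postcode": "CF10 1AA"}
latest = {"occupation": "surgeon", "annual_miles": 4000, "postcode": "CF10 1AA"}

def consistency_flags(prev, new):
    """Compare a returning customer's new application with their last one
    and list the changes that might deserve a closer look."""
    flags = []
    if prev["occupation"] != new["occupation"]:
        flags.append(f"occupation changed: {prev['occupation']} -> {new['occupation']}")
    if prev["annual_miles"] and abs(new["annual_miles"] - prev["annual_miles"]) / prev["annual_miles"] > 0.5:
        flags.append("annual mileage changed by more than 50%")
    if prev["postcode"] != new["postcode"]:
        flags.append("postcode changed")
    return flags

print(consistency_flags(previous, latest))
# ['occupation changed: taxi driver -> surgeon', 'annual mileage changed by more than 50%']
```

A flag like this wouldn't mean rejecting the application outright - only that it deserves a closer look before it goes any further.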
The challenge for the industry is making sure we stop fraudsters completely, rather than passing them on to try their luck with the next insurer. To prevent this, companies must ensure they have similarly robust AI-powered anti-fraud systems covering insurance claims, phone applications, and mid-term adjustments.
How else can insurers and PCWs tackle AI insurance fraud?
Alongside using AI ourselves, the insurance industry has other ways to fight AI-powered insurance fraud. Here are 3 of them:
1. Insurers can manually double-check any suspicious data
Insurers have a deep relationship with customers that they can leverage to tackle fraud.
Unlike PCWs, insurers can ask the customer to provide additional information if they suspect a fraudulent application. For instance, they could request a form of ID and proof of address through a utility bill.
This is currently a manual process for most providers. But with the aid of AI, it could become automated within the insurer's application journey. If the insurer's system detects an anomaly, it could automatically trigger a request for extra documentation.
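To show what that automated trigger might look like, here's a minimal sketch. The anomaly score, threshold, and document list are illustrative assumptions, not a real provider's workflow.

```python
# A minimal sketch of an automated step-up check in an application journey.
# The threshold and document list are illustrative assumptions only.
REQUIRED_DOCUMENTS = ["photo ID", "proof of address (e.g. a utility bill)"]
ANOMALY_THRESHOLD = 0.8

def handle_application(application, anomaly_score):
    """Accept low-risk applications automatically; ask higher-risk ones
    for extra documents rather than rejecting them outright."""
    if anomaly_score < ANOMALY_THRESHOLD:
        return {"status": "accepted", "documents_requested": []}
    return {"status": "pending verification", "documents_requested": REQUIRED_DOCUMENTS}

# A high anomaly score triggers the request for extra documentation.
print(handle_application({"name": "A. Example"}, anomaly_score=0.93))
```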
The risk here is that such requests can add friction to the customer experience. To get around this, insurers and PCWs can use third-party identity verification to keep the process frictionless for genuine customers. For instance, with the customer's consent, they can use credit checks or previous insurance records to build a more complete picture of that customer.
2. The insurance industry can communicate better
Unfortunately, fraudsters aren't brand loyal. Instead, they regularly move between insurers and PCWs. If they're blocked by one, they simply go elsewhere.
That means insurers can't prevent fraud by acting alone. We need to work as an industry to share information about how fraudsters operate, what they're attacking, and what techniques they use.
As insurers and PCWs, we can tell each other about the fraud we're experiencing. Others may have experienced it before and might know how to prevent it.
Ultimately, everyone in the insurance industry - even our competitors - can benefit from participating in fraud forums. These are arenas where industry stakeholders come together to share their experiences of fraud and discuss how best to respond.
The insurance industry can also do a better job of communicating the risks of AI fraud to customers. Part of this involves publishing content that explains the types of fraud out there and what customers can do to protect themselves. It also means raising awareness of the specific risks of AI, such as how to spot ads and websites that look deceptively legitimate.
3. The industry can stay on top of future tech trends
AI has moved quickly in recent years, and there's no doubt it will continue to evolve in the years to come.
To protect ourselves and our customers against fraudulent activity, it's important that we stay alert to any changes in the technology. That, of course, means detecting any new ways that fraudsters use AI to attack our insurers - for instance, in different parts of the customer journey.
But it also means deploying the newest AI technology to protect ourselves against the fraud of the future. As part of this, insurers may need to invest more in AI, and in training staff to use these tools. With so much money still lost to fraud each year, it's an investment worth making.
What is Confused.com doing to tackle AI insurance fraud?
We're already doing a lot to tackle fraud by leading cooperation in the industry and keeping our customers informed.
We lead industry fraud forums
I host quarterly forums to discuss fraud, which involve 40 to 50 companies from across the industry, including our competitors. It's a valuable space to discuss the challenges and threats we face and the methods and technologies we use to respond.
Typically, the forums start with my update on the threats we're currently seeing. Then, the discussion is open to everyone to make their own comments, share their experiences, and suggest solutions.
If you want to get involved, contact us at fraud@confused.com.
We stay up-to-date on technology and trends for the industry and customers
The only way we can protect ourselves, insurers, and consumers is to stay on top of what's happening across the industry. This could be new fraud threats or new ways to tackle them.
We share this information with the rest of the industry. For instance, I send out regular bulletins to insurers and other PCWs with the latest statistics and information we have. This way, everyone can be prepared to face the threat of fraud.
We also regularly refresh our fraud-related content for customers. One of the first tasks I completed when I arrived at Confused.com was to update our fraud content. As fraud is constantly changing, customers need to keep up in order to stay protected.
Find out more about our customer-facing fraud information.
Fraud is changing, but we can keep up
Fraudsters are always looking for new victims, strategies, and opportunities to make money. It's our job as an industry to protect ourselves, each other, and our customers.
At Confused.com, our fraud forums help insurers and PCWs combat fraud together. And as fraud changes with the progress of AI, we need to adapt, too.
Keep up with the challenges that the insurance industry is facing at Confused.com.