AI-generated images of car damage are turning into a real insurance fraud issue, with Admiral linking a sharp rise in cases during 2025 to manipulated images and fabricated supporting materials. The problem is no longer limited to suspicious paperwork. Photos of damaged vehicles can now be edited to make a loss look worse or to support a duplicate filing.
According to a BBC report, one filing used an AI-edited number plate on a damaged Land Rover, while a similar image with a different plate appeared in a second case.
Another image made rear-end damage look more severe than it was. Admiral said those submissions were caught by its fraud team and denied before any payout was made.
Admiral also said fraud rose 71% in 2025 from the previous year, and tied part of that increase to easier access to AI tools that can alter images and create documents that never existed. That gives this trend a clear consumer angle, because the cost of fraud does not stay with the fraudster alone.
How the fake evidence works
Instead of relying only on forged forms or invented stories, scammers can now submit a convincing image as supposed proof. In the examples provided, AI was used to change vehicle photos in ways that could help exaggerate damage or recycle the same incident into another filing.
That changes the burden on claims teams. They are no longer just checking paperwork and timelines; they are also testing whether the image itself can be trusted. Admiral said its fraud tools are improving, and the wider industry is sharing tactics as this type of abuse becomes harder to ignore.
Why premiums are part of this
Fraud adds costs across the system, and insurers say those costs can feed into higher premiums more broadly.
That's what makes AI image fraud more than a niche crime story. Even drivers with legitimate claims could feel the effects, through higher prices and more scrutiny during the review process.

Some cases involve opportunistic attempts to inflate a real loss, while others involve fake documents and other made-up materials built to support a false claim from the start. AI makes both paths easier to scale.
What happens next
The immediate response is better detection, but the stakes for customers are also clear.
Admiral said invented or exaggerated proof can lead to a denied claim, a canceled policy, and in more serious cases, criminal prosecution. As AI-made vehicle evidence spreads, closer inspection of crash photos is likely to become a normal part of claims screening.
While Google has taken steps to watermark images produced by its AI tools, watermarking is not yet an industry-wide practice.