2025 Report - Deepfakes and the Confidence Problem

Written by Audian Paxson | Oct 09, 2025

We just wrapped our second annual deepfake threat report, and one number stood out:
99% of security leaders say they’re confident in their deepfake defenses.

On paper, that sounds like good news. But it doesn’t line up with how those same organizations actually performed. In simulated detection exercises, only 8.4% of organizations scored above 80%. The average score was 44%.

So, there’s a noticeable gap between confidence and capability, and that gap carries real risk.

Losses Are Already Happening

Over half of the organizations we surveyed reported financial losses tied to deepfake or AI voice fraud in the past year. The average loss was over $280,000 per incident. Nearly 20% reported losses of $500,000 or more. Some exceeded $1 million.

These incidents often involved synthetic voice or video impersonation credible enough to trigger a funds transfer or expose credentials.

Awareness has grown: 88% of organizations now offer some kind of deepfake-related training, though its effectiveness is mixed. Even so, the detection rates suggest that most employees still aren’t prepared to recognize or respond to realistic impersonation attempts.

As synthetic media becomes more convincing, detection is likely going to get harder—not easier.

Why This Needs Attention

It’s understandable that teams want to feel prepared. But there’s a risk in mistaking visibility for readiness. Most organizations are talking about deepfakes, running training on them, and assuming that box is checked.

The data doesn’t support that assumption. If the simulations are any indicator, most users won’t spot a convincing AI-generated voice or video when it shows up, especially when it arrives in the flow of day-to-day work, through a trusted channel.

This isn’t about panic. It’s about clarity. If your detection rates are hovering in the 40s, and you’re reporting confidence in the 90s, something’s off. And in security, that kind of mismatch often leads to exposure.

What Else the Data Shows

The full report covers much more than confidence levels.

We tracked deepfake incidents across industries, looked at which communication channels are being targeted, and analyzed where organizations are planning to invest in 2026.

A few takeaways:

  • Attack vectors are expanding beyond voice into video, chat, and even meeting-based impersonation.
  • Email and static images are currently the most common vectors (59.3% each), while other modalities are accelerating.
  • Investment in detection tools is growing, but it still lags far behind the level of concern most teams report.

If there’s a single point worth emphasizing, it’s that confidence without evidence creates risk. And in the case of deepfakes, that risk is already materializing.

Want to see the full report?
Download the IRONSCALES Fall 2025 Threat Report for all the details.

Interested in protecting your org from deepfake attacks?
Schedule a demo to see how we can help.