
Why We Commissioned the Osterman Trust Research (And What We Found)

Written by Audian Paxson | Feb 06, 2026

When we started working with Osterman Research last year on this trust study, I had one clear goal...I wanted the freedom to ask the specific questions that would validate (or challenge) what we'd been seeing in our threat data and what I'd been writing about in our Phishing 3.0 thought leadership.

If you've read my Phishing Renaissance piece from October, you know I've been tracking how AI hasn't created new attack types. Nope, it's resurrected the classics and made them perfect.

So I wanted to know if that narrative held up under third-party scrutiny. I wanted quantifiable data that either confirmed we were onto something or forced us to dig deeper into why our observations didn't match the broader market.

Three months after fielding the research, the data is in.

Spoiler: the threat curve has absolutely reset.

What I Was Curious About

Going into this research, I had three specific areas of curiosity:

1. Are organizations actually experiencing the trust crisis we keep talking about?

We see it in our customer data daily: attacks that weaponize trust by spoofing executives, vendors, and colleagues. But is this a widespread phenomenon or just noise in our specific customer base?

2. Is training actually working for emerging threats like deepfakes?

Everyone talks about "security awareness" as the solution. But when attacks look perfect, sound perfect, and arrive through trusted channels, can humans really be expected to catch them?

3. Are organizations ready to make drastic changes?

Because if they're not, then all this research is just academic. We need to know if security leaders are prepared to replace legacy tools, add new defenses, and actually act on the intelligence they're getting.

Osterman surveyed 128 IT and security decision-makers at organizations with 1,000-5,000 employees. Not Fortune 500 enterprises with unlimited budgets. Not tiny startups still figuring out email authentication. The mid-market: companies big enough to be targets, small enough to have resource constraints.

The Three Findings That Stood Out

1. Almost Everyone Has Been Hit (And Many Don't Know It Yet)

87.5% of orgs experienced at least one trust-undermining incident in the past 12 months.

Let me put that another way...if you're a security leader reading this and you think your organization hasn't been hit by one of these attacks, there's an 87.5% chance you're wrong.

We're not talking about generic phishing emails that get caught by spam filters. We're talking about attacks that specifically undermine trust: vendor impersonation, executive spoofing, colleague BEC, deepfake attempts.

Also curious to me... 82% reported heightened threat actor interest in exploiting trusted communications over the past year, with 51.6% saying that interest reached the "highest" level.

So not only have most organizations been hit, but attackers are actively ramping up their focus on these trust-based attacks.

2. The Finance Team Paradox

Here's the finance team problem in one stat: 59.4% of organizations rated finance employees as "high" or "extreme" priority targets.


That makes perfect sense to me. Finance teams have direct access to payment systems, vendor accounts, wire transfer capabilities. If you're an attacker trying to steal money, you go after finance.

And BTW... those same organizations reported the lowest confidence in finance teams' ability to detect sophisticated impersonation and BEC scams.

So, yeah...the people attackers target most are the ones security leaders trust least to defend themselves.

And it's not because finance teams are incompetent. It's because the attacks targeting them are getting incredibly sophisticated. AI-generated BEC emails that match company communication patterns perfectly. Vendor impersonation scams that include correct invoice numbers, project details, and payment terms. Deepfake voice calls from "the controller" authorizing emergency wire transfers.

Finance teams are dealing with high-volume, time-sensitive communications where speed matters and scrutiny gets sacrificed. Attackers know this. And they exploit it daily.

3. Organizations Are Ready to Take Drastic Action (Like, Now)

Another finding that kinda contradicts what we usually hear in sales conversations. Security leaders love to say "we're looking at options" or "we're evaluating next year's budget."

But when Osterman asked directly about willingness to make changes...

  • 70.4% said they're willing to replace their entire security stack
  • 76.6% are willing to add best-in-class point solutions
  • 70.3% are willing to replace core security platforms like SEGs

Those aren't "maybe someday" numbers; those are "we're ready to move" numbers.

The reasoning makes sense: 55% of security leaders say failing to defend against these attacks significantly increases the likelihood of a data breach, the highest "extreme" rating of all potential impacts measured in the study.

They understand what's at stake.

The Findings That Didn't Make the Headlines (But Should)

While everyone's going to focus on the 88% incident rate and the 70% willing to replace stacks, here are a few nuggets from the data that caught my attention:

They said deepfakes are "immature and emergent" - and that's terrifying.

57.8% of respondents said current deepfake threats are "immature and emergent." Now think about that. The attacks we're seeing today, the ones already costing organizations an average of $280,000 per incident, are the immature versions.

Osterman puts it bluntly in the report: "When 2030 arrives, it's likely we'll look back and comment on how significantly attacks have changed since this research was conducted."

We're at the beginning of this threat curve, not the end.

Traditional phishing detection is the ONLY defensive technology declining in importance.

Every other security technology measured in the study (AI-generated attack detection, deepfake detection, behavioral anomaly detection, account takeover prevention) is increasing in importance over the next 12 months.

But traditional phishing detection? Declining.

Not because phishing attacks are disappearing, but because they're evolving. The legacy tells (typos, clumsy links, generic salutations) stop working when AI can generate flawless, on-brand emails that look and read exactly like the real thing.

Training is "particularly ineffective" for deepfakes.

I'm not surprised, but another insight in the research validates something we've been saying for months...you cannot expect humans to spot perfect synthetic media without technological support.

60% of respondents lack confidence in their ability to defend against deepfakes despite current training. And Osterman's assessment is harsh: training approaches for deepfake audio and video are "particularly ineffective."

Why? Because the training assumes people can detect anomalies. But when the synthetic voice sounds exactly like your CEO, when the video shows your colleague's actual face saying actual words, when every visual and auditory cue screams "this is real," training fails.

What This Means for How We Think About Email Security

I first started fighting phishing 20 years ago, and in that time I've seen the evolution from basic spam filters to SEGs to solutions that plug in via API and use AI detection. And I can tell you that this Osterman research marks an inflection point.

The assumption that email security is about catching "bad emails" is dead.

The new reality is that securing business communications is about validating trust.

Is this really from my CFO? Is this actually my vendor? Is this legitimately a colleague asking for credentials? The indicators we've relied on for 20 years don't answer those questions anymore when AI can perfectly mimic trusted senders.

That's why Osterman explicitly calls out SEGs as failing. On page 15 of the report, they don't mince words:

"Legacy email protections won't help organizations defend against AI-powered phishing attacks. Secure email gateways (SEGs) and email security solutions designed to look for malicious links, weaponized attachments, and account impersonations are too blunt an instrument to recognize the subtle indicators of modern and still emerging AI-powered attacks."

Osterman Research, Rebuilding Trust in Digital Communications, Page 15

"Too blunt an instrument."

That's third-party validation for what we've been saying: The tools designed for Phishing 1.0 don't work for Phishing 3.0.

Look, I know this blog is essentially me promoting our company's commissioned research. I'm not pretending otherwise.

But here's why I wanted to write it this way, in first person, telling you the backstory:

This research matters beyond IRONSCALES.

Whether you're a current customer, a competitor, or a security leader evaluating options, the data in this report should inform your decisions. The 88% incident rate isn't an IRONSCALES problem, it's an industry problem. The finance team vulnerability isn't our customer data, it's Osterman's independent survey of 128 organizations.

If you read this report and decide IRONSCALES isn't the right solution for you, fine. But please don't read it and decide the status quo is acceptable.

Because the threat curve has reset. Attacks we thought were "solved" are immature and dangerous again. Legacy defenses are explicitly failing. And organizations are ready to take drastic action to get ahead of what threat actors are just beginning to deploy.

The window to act is now. Before these "immature" attacks mature further. Before the finance team gap gets exploited one more time.

Download the full Osterman report here - no form fill required, because this information is too important to gate.

And if you want to talk about what this means for your organization specifically, let's talk.