Two stories landed in my feed the same week, and they confirm something I wrote about last September.
In one, the CEO of an AI startup posted that more than 20 deepfake candidates had hit his recruiting pipeline in a few weeks. Real-time face filters on the video calls. Voices that almost passed. Résumés tuned for a hiring funnel designed around the assumption that the candidate was a person.
In the other, the U.S. Department of Justice sentenced two American nationals for running the U.S. side of a North Korean IT worker scheme.
That's the same operation I analyzed in AI Gone Rogue when Anthropic published their threat intelligence report. Back then, the story was DPRK workers holding Fortune 500 engineering jobs by leaning on Claude for basic technical tasks, cultural references, and unscripted conversation. The DOJ release is the prosecution version of that story. Stolen and synthetic identities. Hundreds of American companies. Salaries funneled back to the DPRK. An industrial supply chain for fraudulent hires.
Different stories. Same root failure. The thing both rely on (and the thing your enterprise relies on every day to function) is identity. And identity is no longer doing the job we asked it to do.
The conversation is stuck on the wrong layer
Most of the deepfake discourse keeps grinding away at the artifact. Is the video clean? Does the audio match the lip sync? Can the algorithm spot a synthetic frame? Those are real questions. They're also the wrong layer.
The artifact is the surface. What's failing underneath is identity itself as a trust primitive in the enterprise.
I made the same point in September, in a section I called "The Identity Crisis." Then it was about Fortune 500 engineering hires propped up by Claude prompts. Now it's hiring funnels at AI startups and federal sentencings. The pattern keeps showing up because the underlying assumption it breaks is the same one.
Real prompt. Real DPRK IT worker. Captured in Anthropic's threat intelligence report. Someone in their Fortune 500 employer's Slack mentioned getting a muffin. The actor didn't know if "muffin" was engineering jargon, so they asked Claude. (They also asked how to tell if Go was installed. That one actually is jargon.) Industrialized identity fraud at federal-sentencing scale, propped up by snack-level confusion.
For decades, we operated on a quiet assumption that forging identity convincingly was expensive. A fake résumé is one thing. A fake résumé attached to a fake face on a live video call, plus a fake voice that handles unscripted follow-ups, plus a fake address that clears a background check, plus a fake bank routing number that survives onboarding, was a different cost structure. Attackers couldn't afford to run all of it at scale. Now they can.
The DOJ case is the proof at scale. When the economics flip, you don't get one fake candidate. You get an industrialized supply chain.
Hiring noticed first because it was the softest target
Hiring made the news because it's the most photogenic example, and because the verification surface in a typical recruiting funnel is genuinely thin. A résumé. A few interviews. A reference check. A background check. Most of those signals were already easy to fake before generative AI, and recruiters are trained to find good fits for roles. The skill set that finds talent is different from the skill set that catches adversaries.
Hiring is one symptom. The same pattern is showing up everywhere identity matters and verification is thin.
Finance teams are fielding voice-cloned approvals on wire transfers (some of them sound exactly like the CFO, because in a sense they are the CFO). Customer support is running sessions where the "customer" has all the right answers to the security questions, because attackers scraped the answers off LinkedIn. Internal collaboration tools host video calls where one participant is a face nobody has met in person, running through a real-time filter. And adversaries are now testing the entire customer onboarding stack (the KYC checks, the liveness selfies, the document scans) with synthetic documents and synthetic selfies they can generate at zero marginal cost.
Hiring caught the symptom. Identity is the disease.
And this isn't theoretical inside our own four walls. HR Brew featured our head of Global HR, Julia Frament, last July on exactly this shift in how talent acquisition has to operate in the deepfake era. When an email security company's people team is on the record in an HR trade about fraudulent candidates, the problem has clearly migrated out of security and into the rest of the business.
What changes when verification stops being a moment
The enterprise built its trust model around moments. The interview. The KYC check. The login. The badge swipe. Each one was a snapshot, treated as if it would stay true until the next snapshot. That worked when the cost of forging a moment was high.
It doesn't work anymore. We have to treat identity as a continuous signal: something that drifts, gets spoofed, gets hijacked between checks; something we have to re-verify across multiple channels and modalities for it to mean anything.
A few things follow from that.
Humans alone can't carry the load.
Look, we're decent at spotting bad fakes and bad at spotting good ones, and the fakes keep getting better. Whatever signal layer we build has to assume the human in the loop will sometimes be wrong, and design accordingly.
The defense has to span channels. Attackers are hitting identity in email, in voice, in video, in chat, in document workflows, and in hiring. Any defense that lives in only one channel loses the moment the attacker walks over to the next one.
The teams that solve this become critical infrastructure. Continuous, multi-signal identity verification across every channel where identity matters is the new ground floor for operating a business.
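To make the shift from "moments" to a continuous signal concrete, here is a minimal sketch of the idea in Python. Everything in it is an illustrative assumption (the class names, the exponential decay, the thresholds); it is a thought experiment about the trust model, not a description of any real verification product.

```python
from dataclasses import dataclass, field
import time

# Hypothetical sketch: identity as a continuous, multi-channel signal
# rather than a one-time checkpoint. All names, weights, and thresholds
# are illustrative assumptions.

@dataclass
class IdentitySignal:
    channel: str      # e.g. "email", "voice", "video", "badge"
    score: float      # 0.0 (clearly spoofed) .. 1.0 (strongly verified)
    timestamp: float  # when the signal was observed

@dataclass
class IdentityProfile:
    subject: str
    signals: list = field(default_factory=list)

    def observe(self, channel, score, now=None):
        """Record a verification event from any channel."""
        self.signals.append(IdentitySignal(channel, score, now or time.time()))

    def trust(self, now=None, half_life=86400.0):
        """Each signal's weight halves every `half_life` seconds, so a
        snapshot (the interview, the login) is not treated as true
        forever: with nothing fresh, trust drifts toward zero."""
        now = now or time.time()
        if not self.signals:
            return 0.0
        decayed = sum(
            0.5 ** ((now - s.timestamp) / half_life) * s.score
            for s in self.signals
        )
        return decayed / len(self.signals)

    def needs_reverification(self, now=None, threshold=0.6, min_channels=2):
        """Re-check when trust has decayed below a floor, or when it
        rests on a single channel an attacker could own end to end."""
        channels = {s.channel for s in self.signals}
        return self.trust(now) < threshold or len(channels) < min_channels
```

The point of the sketch is the shape, not the numbers: verification events decay instead of persisting, and a profile backed by only one channel is treated as unverified no matter how strong that one signal looks.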
Where this leaves us
The Zania post and the DOJ release look like hiring stories. The bigger story is that the trust model the entire enterprise runs on has been degrading quietly for years, and it's now failing in public.
The next decade of security work is going to be about putting identity back together as a continuous, multi-channel, verifiable signal. (At IRONSCALES, the slice we work on covers where most enterprise identity attacks already land today, which is email, business communications, and the deepfakes that now travel through them.) Hiring teams are catching what's hitting them at the front door. The same problem is walking through every other door at the same time.
Identity is a continuous signal now. Read it that way, across every channel where it matters, or watch the trust model keep failing in public.