Blog

AI Personas, Deepfakes, and the Collapse of Trust (Part 2)

Written by Audian Paxson | Aug 07, 2025

This is the second post in a three-part series unpacking OpenAI’s June 2025 threat intelligence report and what it signals for enterprise communication security.

When the Attacker Has a Resume

On page 4 of OpenAI’s latest threat report, there’s a case that stopped me mid-scroll. A campaign dubbed "IT Workers" involved North Korea-linked actors using AI to pose as freelance tech professionals and secure remote jobs at U.S. companies. They used:

  • chatbots to generate cover letters
  • deepfake videos for interviews
  • synthetic resumes with fake work history
  • cloned GitHub contributions
  • customized profile photos

Each of these elements alone might seem unremarkable. Together, they create a near-flawless synthetic applicant built to deceive at scale.

I mentioned this to a few people at RSA earlier this year while talking through our Deepfake Protection feature.

Every single one of them had a story. HR teams who flagged applicants with resumes that were a little too polished. Candidates who interviewed over video with a slight lag between lip movement and voice. Something felt off, but nothing tripped traditional verification systems.

This isn’t phishing. It’s infiltration. One job application at a time.

Personas Built for Deception

One thing OpenAI’s report makes painfully clear is that synthetic identities are no longer throwaway assets. They’re fully developed personas designed for long-term use. In another campaign, "Uncle Spam" (pages 32–35), actors created entire fake organizations, including logos, domain infrastructure, and Bluesky profiles with AI-generated avatars and bios. These weren’t sloppy fakes. They were plausible, even professional.

Another campaign worth revisiting is "VAGue Focus" (pages 17–21), which we touched on in Part 1. It involved fake journalists reaching out to experts for comment, planting misleading narratives via blog posts and social media engagement.

The outreach read as fluent, friendly, and totally fabricated, and it still serves as one of the clearest examples of persona-based trust exploitation.

Each of these personas had a purpose. They weren’t just trying to look real; they were designed to earn trust.

How Synthetic Personas Bypass Traditional Defenses

We’re wired to trust certain signals: a face, a name, a professional affiliation. We recognize logos, job titles, even the rhythm of a business email. But AI makes all of that easy to fake.

The result? Most of the old cues we rely on to determine legitimacy (the ones training programs and security awareness campaigns still emphasize) just aren’t holding up.

These attackers aren’t sending sketchy emails from misspelled domains. They’re showing up to the video call looking like a real candidate. They’re reaching out as a journalist from a publication that doesn’t exist yet, but looks like it should.

And because the campaign is well-researched, the conversation feels relevant. Timely. Targeted. Trustworthy.

Where Deepfake Protection Fits

Let’s be clear: this isn’t about catching celebrities in fake videos. It’s about preventing threat actors from assuming realistic identities inside business ecosystems. Whether it’s a job applicant, a potential vendor, or a partner outreach, Deepfake Protection adds an automated layer of scrutiny:

  • Visual anomaly detection that flags manipulated or AI-generated facial footage.

  • Identity linkage analysis to assess whether the face in the video matches public profiles, email domains, or device signals.

  • Behavioral context that alerts if a new identity is acting out of pattern, or if multiple personas are converging on the same org.

This isn’t a nice-to-have. It’s the front line for stopping AI-powered impersonation.
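
To make that layering concrete, here’s a minimal sketch, in Python, of how signals like the ones above might feed a triage decision. Everything in it, the signal names, weights, and thresholds, is hypothetical and for illustration only; it is not how the actual Deepfake Protection feature is implemented.

```python
from dataclasses import dataclass


@dataclass
class PersonaSignals:
    """Illustrative signals for one inbound identity (applicant, vendor, journalist)."""
    visual_anomaly_score: float    # 0.0-1.0 from a hypothetical face-manipulation detector
    identity_linkage_score: float  # 0.0-1.0 agreement between face, public profiles, email domain
    out_of_pattern: bool           # behavioral flag: identity acting unlike its claimed role
    shared_infrastructure: bool    # same device/network fingerprint behind multiple personas


def persona_risk(signals: PersonaSignals) -> str:
    """Combine signals into a coarse triage decision.

    Weights and thresholds are placeholders; a real system would tune
    them against labeled data rather than hand-picked constants.
    """
    score = 0.4 * signals.visual_anomaly_score
    score += 0.3 * (1.0 - signals.identity_linkage_score)
    if signals.out_of_pattern:
        score += 0.15
    if signals.shared_infrastructure:
        score += 0.15

    if score >= 0.6:
        return "block-and-review"       # hold the interaction, route to security/HR review
    if score >= 0.3:
        return "step-up-verification"   # e.g. live liveness check or out-of-band identity proof
    return "allow"


# Example: a video interview where the face looks manipulated and the
# identity doesn't line up with any public footprint.
candidate = PersonaSignals(
    visual_anomaly_score=0.7,
    identity_linkage_score=0.2,
    out_of_pattern=True,
    shared_infrastructure=False,
)
print(persona_risk(candidate))  # -> "block-and-review"
```

The point of the sketch is the layering, not the math: no single signal is damning on its own, but a manipulated face plus a weak public footprint plus out-of-pattern behavior adds up to something a human reviewer should see before access is granted.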

From the Field

This chapter of the report doesn’t feel hypothetical. It feels operational. And it signals that we’re no longer dealing with fake messages. We’re dealing with fake people.

The playbook for social engineering has shifted. Now it starts with a convincing LinkedIn profile and ends with access to internal systems. And the controls we’ve relied on, from email authentication to soft skills training, aren’t enough to spot a synthetic persona built for deception.

*****

Up next: Part 3 explores how these personas are being used in coordinated campaigns across platforms to sustain long-term attacks.