
OpenAI's New Threat Report: What It Means for Businesses

Written by Audian Paxson | Jul 15, 2025

This is the first in a three-part series analyzing OpenAI's June 2025 threat intelligence report and its implications for enterprise security.

Part 1

I’ve been digging into OpenAI’s latest report, "Disrupting malicious uses of AI: June 2025," and what’s keeping me up isn’t just what they found; it’s what it signals for how business communication is being targeted, reshaped, and abused.

The report is really good, solid stuff. It documents 10 major operations OpenAI disrupted, and the social engineering tradecraft is frankly impressive. This isn’t about isolated tactics. It’s about AI becoming the infrastructure for deception.

AI as Social Engineering Infrastructure

Here's what changed, and why I think many of us are missing the point. Traditional social engineering required human effort at every step:

  1. Research targets
  2. Craft messages
  3. Maintain (and juggle) personas
  4. Respond to replies

Threat actors could scale through volume, but quality always suffered (you know...the “we’ll make it up in volume” strategy).

AI flipped that equation. Now they can scale quality.

Operation "VAGue Focus" (detailed on pages 17-22 of the report) shows how sophisticated this has become. Chinese-linked threat actors created three fake European media companies—Focus Lens News, BrightWave Media Europe, and Visionary Advisory Group. They used ChatGPT to generate professional correspondence, social media profiles, even detailed pitches offering $2,000 per hour for "intelligence interviews."

The part that got my attention? They were polishing correspondence to U.S. Senators while simultaneously generating cold outreach to researchers. All from the same AI-powered operation.

Cold outreach from an X account associated with this operation, publicly messaging a researcher on X.

The Employment Attack Vector

The IT Workers scheme (pages 4–7) is something I actually heard about from three different customers at our booth during RSA this year. We were talking about our new Deepfake Protection, and each one of them mentioned job applicants using AI-generated videos.

The scheme OpenAI documents involves threat actors (likely North Korean, according to the report) using AI to automate the entire job application process:

  • Generated credible resumes aligned to specific job descriptions
  • Answered real-time interview questions
  • Created consistent work histories and educational backgrounds
  • Researched remote work setups to evade corporate security (this part was trippy)

Nah…this isn’t phishing in the traditional sense. It’s AI-powered infiltration, hiding behind job applications, not spoofed domains.

What really bothers/impresses me about this operation: they used contractors in Africa and North America to operate hardware remotely, making the deception nearly undetectable during video interviews. Your vetting process assumes the person on camera is the person you're hiring. I'm not sure that assumption holds anymore. Check out the LLM ATT&CK framework mapping for the IT Workers scheme below to get what I'm talking about.

Each activity OpenAI observed is mapped to an LLM ATT&CK Framework category:

  • Systematically fabricating detailed résumés aligned to various tech job descriptions, personas, and industry norms, with looping scripts automating consistent work histories, educational backgrounds, and references. (LLM Supported Social Engineering)
  • Using the model to answer employment-related application questions, coding assignments, and real-time interview questions based on particular uploaded résumés. (LLM Supported Social Engineering)
  • Seeking guidance on remotely configuring corporate-issued laptops to appear domestically located, including geolocation masking and endpoint security evasion methods. (LLM-Enhanced Anomaly Detection Evasion)
  • LLM-assisted coding of tools to move the mouse automatically or keep a computer awake remotely, possibly to support remote-working infrastructure setups. (LLM Aided Development)
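That last row deserves a closer look. Here's a minimal sketch of the kind of keep-awake / mouse-jiggler utility the report describes being coded with LLM help. To be clear, this is my own illustrative Python (assuming the third-party pyautogui library), not code from the report; the point is how little effort this tooling takes.

```python
# Illustrative sketch only, not from OpenAI's report.
# Assumes Python 3 and pyautogui (pip install pyautogui).
# Utilities like this keep a remote machine looking "active" so idle-timeout
# and presence checks never fire, which is the kind of tool the report says
# was built with LLM assistance.

import random
import time

import pyautogui


def keep_awake(interval_seconds: int = 60, jitter_pixels: int = 5) -> None:
    """Nudge the mouse a few pixels on a timer so the OS never registers idle time."""
    while True:
        dx = random.randint(1, jitter_pixels)
        dy = random.randint(1, jitter_pixels)
        pyautogui.moveRel(dx, dy, duration=0.2)    # small relative move
        pyautogui.moveRel(-dx, -dy, duration=0.2)  # drift back to the original spot
        time.sleep(interval_seconds)


if __name__ == "__main__":
    keep_awake()
```

Nothing here is sophisticated, and that's the point: an LLM will happily produce it in seconds, which is part of what made the remote-hardware deception so cheap to run.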


Multi-Platform Coordination

These operations share something that concerns me more than the individual tactics: multi-platform coordination. Email is just one touchpoint now. The VAGue Focus operation used email for formal correspondence while leveraging social media for relationship building.

The attackers behind VAGue Focus didn’t treat email as a channel; they treated it as a node in a broader relationship campaign, blending social media, outreach, and fake interviews into a seamless operation.

I keep coming back to this because it breaks our mental model of email as a standalone threat. We've been securing email like it's an island, but these attackers are treating it as part of an ecosystem. They're building relationships on LinkedIn, sending professional emails, and conducting "research interviews" all as part of the same operation.

Screenshot of VAG Group’s Contact Us page where the Chinese characters for ‘contact us’ (聯絡) are visible in the drop-down menu.

The Long Game Is Here

What I find most unsettling about the ScopeCreep operation (page 25) isn't the malware, it's the operational discipline. These attackers used temporary email addresses for each ChatGPT conversation, abandoning accounts after single interactions to avoid detection patterns.

They're not looking for quick wins.

They’re not going for smash-and-grab attacks.

They’re building persistent access. Slow, quiet, hard to detect.

The employment scheme shows attackers willing to invest months in a single target (complete with performance reviews and team meetings).

Why I Think We're Fighting the Last War

I've been in this space long enough to recognize when we're one step behind. The industry designed email security to stop bad messages. But when attackers can generate good messages that reference your actual business context, that entire defensive model collapses.

The $2,000/hour intelligence interview offers in VAGue Focus? Those weren't random. They were researching specific targets, understanding their expertise, and crafting offers that would be genuinely tempting to researchers and policy experts.

Your phishing simulations probably don't include personally researched, contextually relevant offers from fake but professionally presented organizations. Mine don't either, and that's a problem.

Tweet generated by this threat actor using ChatGPT and posted from an X account that consistently posted this actor's content.

My Take

I think we're witnessing the industrialization of trust exploitation. It's not just about better phishing emails; it's about the whole fabric of business communication. It's about hacking the human. It's about creating entire fake business ecosystems designed to fool your most security-aware employees.

The technical capabilities are impressive, but what keeps me up is the strategic thinking (I know they are bad guys, but I can still appreciate their creativity).

These operations show patience, research, and genuine understanding of how business relationships develop.

They're not trying to trick people into clicking links. They're trying to become trusted business partners.

*****

Coming next: How AI creates entire fake business personas and what this means for identity verification in enterprise communications.