When a North Korean scammer asks Claude "what is a muffin?" it seems innocent enough. When that same person uses AI to maintain a Fortune 500 engineering job while funding weapons programs, we're looking at a fundamental shift in cybercrime.
After writing a three-part series on OpenAI's threat intelligence findings, I thought I understood where AI-powered cybercrime was heading. Then Anthropic dropped their Threat Intelligence Report: August, and it became clear that we're not just watching the evolution of cybercrime; we're witnessing its complete transformation.
OpenAI showed us how skilled criminals use AI to work faster. Anthropic shows us something fundamentally different…people with zero baseline capability becoming sophisticated threat actors purely through AI dependency.
The Dependency Revolution
To me, the most unsettling finding in Anthropic's report isn't about any single attack, it's about what these actors can't do. Take the North Korean IT workers maintaining Fortune 500 employment while needing AI assistance for basic technical tasks, cultural references, and professional communication. Anthropic's threat intelligence team illustrated the scope of this dependency with examples of the basic technical and cultural assistance these actors required.
As the Anthropic investigators noted: "You don't need to know English. You don't need to know the cultural context of the United States, and you don't need any technical skills because Claude can help you through each of those barriers."
Or the ransomware-as-a-service operator selling $400-1200 malware packages despite being "unable to implement complex technical components or troubleshoot issues without AI assistance."
This represents a fundamental shift from the AI-assisted operations OpenAI documented. Those actors had skills and used AI to scale them. Anthropic's actors don't have skills (like, none); they have AI instead.
The numbers tell the story about the North Korean workers' Claude usage:
- 61% on frontend development
- 26% on general programming tasks
- 10% on interview preparation
These aren't skilled developers using AI for efficiency. These are people who literally cannot perform their jobs without constant AI assistance, yet they're successfully infiltrating major technology companies.
From Tool to Co-Conspirator
What Anthropic calls "vibe hacking" represents something I didn't see in the OpenAI analysis. This goes beyond using AI as a technical consultant; it's AI making strategic and tactical decisions throughout attack operations.
For instance, look at the case study involving a cybercriminal (tracked as GTG-2002) who used Claude Code not just for reconnaissance or credential harvesting, but for analyzing exfiltrated financial data to determine "appropriate" ransom amounts and for generating psychologically targeted extortion demands. Claude wasn't just helping with technical execution; it was actively participating in strategic decision-making about how to maximize victim impact.
Take a look at this simulated custom ransom note, created by Anthropic's threat intelligence team for research and demonstration purposes, based on their analysis of files extracted from the real operation.
To: [COMPANY] Executive Team
Attention: [Listed executives by name]

We have gained complete compromise of your corporate infrastructure and extracted proprietary information.

FOLLOWING A PRELIMINARY ANALYSIS, WHAT WE HAVE:

FINANCIAL SYSTEMS
[Banking authentication details]
[Historical transaction records]
[Wire transfer capabilities]
[Multi-year financial documentation]

GOVERNMENT CONTRACTS ([EMPHASIZED AS CRITICAL])
[Specific defense contract numbers]
[Technical specifications for weapons systems]
[Export-controlled documentation]
[Manufacturing processes]
[Contract pricing and specifications]

PERSONNEL RECORDS
[Tax identification numbers for employees]
[Compensation databases]
[Residential information]
[Retirement account details]
[Tax filings]

INTELLECTUAL PROPERTY
[Hundreds of GB of technical data]
[Accounting system with full history]
[Quality control records with failure rates]
[Email archives spanning years]
[Regulatory inspection findings]

CONSEQUENCES OF NON-PAYMENT:
We are prepared to disclose all information to the following:

GOVERNMENT AGENCIES
[Export control agencies]
[Defense oversight bodies]
[Tax authorities]
[State regulatory agencies]
[Safety compliance organizations]

COMPETITORS AND PARTNERS:
[Key commercial customers]
[Industry competitors]
[Foreign manufacturers]

MEDIA:
[Regional newspapers]
[National media outlets]
[Industry publications]

LEGAL CONSEQUENCES:
[Export violation citations]
[Data breach statute violations]
[International privacy law breaches]
[Tax code violations]

DAMAGE ASSESSMENT:
[Defense contract cancellation]
[Regulatory penalties in millions]
[Civil litigation from employees]
[Industry reputation destruction]
[Business closure]

OUR DEMAND:
[Cryptocurrency demand in six figures]
[Framed as fraction of potential losses]

Upon payment:
[Data destruction commitment]
[No public disclosure]
[Deletion verification]
[Confidentiality maintained]
[Continued operations]
[Security assessment provided]

Upon non-payment:
[Timed escalation schedule]
[Regulatory notifications]
[Personal data exposure]
[Competitor distribution]
[Financial fraud execution]

IMPORTANT:
[Comprehensive access claimed]
[Understanding of contract importance]
[License revocation consequences]
[Non-negotiable demand]

PROOF:
[File inventory provided]
[Sample file delivery offered]

DEADLINE: [Hours specified]

Do not test us. We came prepared.
This criminal provided Claude Code with operational preferences through a CLAUDE.md file, essentially creating a persistent attack methodology that the AI could reference and improve upon. The actor compromised at least 17 organizations across government, healthcare, emergency services, and religious institutions in a single month. Not through superior skill, but through AI-powered systematic exploitation.
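For readers unfamiliar with the mechanism: a CLAUDE.md file is simply a project-level instructions file that Claude Code reads at the start of each session, which is what gives the directives inside it persistence. Below is a rough conceptual sketch of that pattern, not Anthropic's actual implementation, with deliberately benign example instructions; the function and file paths are illustrative only.

```python
# Conceptual sketch of the "persistent instructions file" pattern: a
# project-level markdown file is loaded at the start of every session and
# prepended to the model's context, so its directives carry over between
# sessions. This is NOT Claude Code's actual implementation; the example
# instructions are deliberately benign.
from pathlib import Path

def build_system_prompt(project_dir: str, base_prompt: str) -> str:
    """Prepend persistent project instructions, if present, to the base prompt."""
    instructions_file = Path(project_dir) / "CLAUDE.md"
    if instructions_file.exists():
        persistent = instructions_file.read_text(encoding="utf-8")
        return f"{base_prompt}\n\n# Project instructions\n{persistent}"
    return base_prompt

if __name__ == "__main__":
    Path("demo").mkdir(exist_ok=True)
    (Path("demo") / "CLAUDE.md").write_text(
        "- Prefer TypeScript for new modules\n- Run the test suite before every commit\n",
        encoding="utf-8",
    )
    print(build_system_prompt("demo", "You are a coding assistant."))
```

The point is the persistence: whatever lives in that file shapes every subsequent session, which is why it functions as a methodology the AI can reference and improve upon when the content is operational rather than benign.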
The Gig Economy Comes to Ransomware
The UK-based ransomware operator (GTG-5004) perfectly illustrates what I now call the "ransomware gig economy."
This actor developed and sold ransomware featuring ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation, all while being completely dependent on Claude for implementation.
Their commercial operation included:
- Basic ransomware packages ($400 USD)
- Full RaaS kits with command and control tools ($800 USD)
- Advanced crypters for native binaries ($1,200 USD)
The technical analysis reveals capable malware with sophisticated features like RecycledGate syscall invocation and FreshyCalls techniques. Yet the actor couldn't implement any of it without AI assistance. They're essentially selling AI-generated capabilities as their own expertise.
Yeah…this demolishes traditional assumptions about the relationship between actor sophistication and attack complexity. When AI can provide instant expertise, the barrier to entry for sophisticated cybercrime operations basically vanishes.
Beyond Individual Operations…
What makes Anthropic's findings more concerning than OpenAI's is the systematic nature of the dependency. The Chinese threat actor they tracked integrated Claude across 12 of 14 MITRE ATT&CK tactics over a 9-month campaign targeting Vietnamese critical infrastructure. I'm not talking about occasional assistance; I'm talking about comprehensive operational support spanning reconnaissance, exploitation, lateral movement, and strategic planning.
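To make the "12 of 14" claim concrete: MITRE ATT&CK's Enterprise matrix defines 14 tactics, and analysts score a campaign by mapping each observed behavior to one of them. Here is a minimal sketch of that bookkeeping; the observed_activity entries are hypothetical examples, not findings from the report.

```python
# Minimal sketch: quantifying how much of the MITRE ATT&CK Enterprise tactic
# set a campaign touches. The observed_activity entries are hypothetical,
# not data from the Anthropic report.
ENTERPRISE_TACTICS = [
    "Reconnaissance", "Resource Development", "Initial Access", "Execution",
    "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "Command and Control", "Exfiltration", "Impact",
]

# Hypothetical mapping of observed campaign activity to tactics.
observed_activity = {
    "scanned internet-facing VPN appliances": "Reconnaissance",
    "exploited an unpatched edge device": "Initial Access",
    "dumped cached credentials": "Credential Access",
    "moved to file servers over SMB": "Lateral Movement",
    "staged compressed archives for theft": "Collection",
}

covered = set(observed_activity.values())
print(f"Tactic coverage: {len(covered)}/{len(ENTERPRISE_TACTICS)}"
      f" ({len(covered) / len(ENTERPRISE_TACTICS):.0%})")
for tactic in ENTERPRISE_TACTICS:
    print(f"[{'x' if tactic in covered else ' '}] {tactic}")
```

Covering 12 of those 14 columns over nine months is what separates comprehensive operational support from the occasional assist.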
The fraud ecosystem cases show even more complete integration. Criminal actors are using AI for victim profiling through stealer log analysis, creating sophisticated carding platforms, powering romance scam operations, and building synthetic identity services. It's infrastructure, not just tooling.
The MCP-powered stealer log analysis particularly caught my attention. The actor developed:
- domain categorization systems
- behavioral profiling based on browser usage patterns
- ranked interest lists
…all to optimize targeting for subsequent attacks. That’s using AI to create entirely new categories of sophisticated fraud.
The Detection Paradox
Here's what gives me a brain freeze…many of these operations were caught not because of superior defensive AI, but because the actors over-relied on AI and made distinctly human mistakes. The "Sneer Review" influence operation (from the OpenAI findings) got busted partly because the operators asked the model to write their own performance reviews, reviews that described their criminal activities. The North Korean IT workers were exposed through terrible prompts that still yielded functional results.
Human laziness remains these actors' biggest vulnerability, but it's also a temporary one. As these actors become more sophisticated in their AI usage, or as AI becomes more capable of operational security, this detection advantage disappears.
What This Means for Defense
The shift from enhancement to replacement changes everything about how we think about cybersecurity. When I analyzed OpenAI's report, the focus was on detecting AI-assisted operations. Anthropic's findings suggest we need to prepare for AI-native threats where the human is just the supervisor. It’s like an evil version of human-in-the-loop.
"...we need to prepare for AI-native threats where the human is just the supervisor."
Traditional threat modeling assumed some baseline of human capability and scaled defensive measures accordingly. That model breaks when AI provides unlimited capability to actors with minimal skills. We're not just defending against better-equipped criminals; we're defending against an entirely new category of threat actor that shouldn't exist but does.
The North Korean employment fraud alone demonstrates the strategic implications. These operations generate hundreds of millions annually for weapons programs while being conducted by people who can't independently write basic code. The scaling potential is terrifying when you consider how many more operations become viable when technical skill is no longer a constraint.
The Monitoring Blind Spot
Here's what really concerns me. Everything Anthropic documented involved hosted AI services where the providers could eventually detect and ban malicious usage. But what happens when criminals move to self-hosted AI tools where there's nobody monitoring behavior at all?
My recent analysis of DeepSeek ("Deepseek is wide open for abuse, here’s why that’s a problem") illustrates exactly why this matters. When Cisco's AI security team tested DeepSeek-R1 with 50 known jailbreak techniques, every single one worked. That's a complete absence of safety guardrails. Unlike the hosted services that detected and banned the actors in Anthropic's report, DeepSeek had a 100% failure rate against attacks that most AI companies have spent years defending against.
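Evaluations like Cisco's typically work by replaying a fixed set of adversarial prompts against a model and scoring whether each response is a refusal; the attack success rate is just the fraction that are not. Here is a minimal sketch of that measurement loop, assuming a hypothetical OpenAI-compatible local endpoint (BASE_URL and MODEL are placeholders) and a crude keyword-based refusal check; production harnesses use trained judge models, and no actual jailbreak prompts are included here.

```python
# Minimal sketch of an attack-success-rate measurement loop. Assumptions:
# a local OpenAI-compatible endpoint at BASE_URL, a placeholder MODEL name,
# and placeholder test prompts (no real jailbreak content).
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # assumed endpoint
MODEL = "local-model"                                    # assumed model name

REFUSAL_MARKERS = ["i can't", "i cannot", "i won't", "i'm not able to"]

def is_refusal(text: str) -> bool:
    """Crude keyword check; real evaluations use a trained judge model."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def query(prompt: str) -> str:
    resp = requests.post(
        BASE_URL,
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def attack_success_rate(prompts: list[str]) -> float:
    successes = sum(0 if is_refusal(query(p)) else 1 for p in prompts)
    return successes / len(prompts)

if __name__ == "__main__":
    test_prompts = ["<placeholder adversarial prompt>"] * 5  # stand-ins only
    print(f"Attack success rate: {attack_success_rate(test_prompts):.0%}")
```

A 100% attack success rate in a harness like this means not a single prompt in the set produced a refusal, which is exactly the result Cisco reported for DeepSeek-R1.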
Think about that in the context of Anthropic's findings. The North Korean IT workers who needed AI assistance for basic tasks, the ransomware operator who couldn't implement encryption algorithms, the fraud operations requiring AI for every step…all of these operations currently depend on hosted AI services that can be monitored, detected, and shut down.
But DeepSeek (and other open-source models) eliminate that oversight entirely. The same dependency-driven actors Anthropic documented could operate with complete impunity using self-hosted AI that has no safety measures and no monitoring capability. The UK ransomware operator selling $400-1200 malware packages through Claude assistance could do the same thing locally with zero detection risk.
We're essentially watching the last line of defense (provider-level monitoring and intervention) disappear as capable but unsecured AI models become readily available for download and self-hosting. The detection advantages that helped identify operations in both the OpenAI and Anthropic reports simply don't exist in a self-hosted environment.
The Identity Crisis
The identity integrity issue hits harder after reading both reports side by side. OpenAI showed us fake journalists and influence operations. Anthropic shows us people maintaining full-time engineering positions at Fortune 500 companies while being functionally unable to perform basic job requirements without AI assistance.
This is now about fundamentally rethinking what human verification means when AI can simulate competence perfectly. The traditional assumption that a real-time demonstration of skills proves genuine competence no longer holds when AI is providing the assistance in real time.
What Comes Next
Anthropic's report makes one thing clear to me. We're past the point where AI augments existing criminal capabilities. We're now seeing AI create criminal capabilities that didn't exist before, operated by people who couldn't have functioned as cybercriminals in the traditional sense.
The dependency patterns suggest this is still early stages. As these actors become more sophisticated in their AI usage (better prompting, more systematic operational security, integration across multiple AI systems), the detection advantages we currently have will erode.
More concerning is the scaling potential. Each successful AI-dependent operation proves the model works, encouraging more actors to attempt similar operations. The barriers to entry continue falling while the potential impact continues rising.
This isn't about preparing for the future of AI-powered cybercrime anymore. Based on what Anthropic documented, that future is already here. The question isn't whether AI will transform criminal operations, it's whether our defenses can adapt fast enough to address threats that fundamentally shouldn't be possible but increasingly are.
The criminals in Anthropic's report are becoming entirely different kinds of threats.
Everyone should take notice.