Cybercriminals just got their own HubSpot (for less than the price of a used car).
Last month, I wrote about Anthropic's findings on how AI enables attacks without expertise. That assessment wasn't theoretical - it's now commercial reality. With ChatGPT, Claude, and - most concerning to me, at least - DeepSeek, with all the abuse risks that come with it, it was only a matter of time before criminals weaponized these models. Now the bad guys have SpamGPT, a $5,000 phishing platform that makes sophisticated email attacks as easy as scheduling your next marketing campaign.
No, this isn't another "AI is scary" piece. This is about a fundamental shift in the economics of cybercrime. When professional-grade phishing tools get packaged with the user experience of Mailchimp, we need to rethink things.
Let me explain...
Let's separate what's verified from what's marketing hype.
The verified facts:
The vendor's claims (unverified):
The critical unknown:
No security research team has publicly attributed a real-world campaign to SpamGPT yet (big "yet"). The underground chatter is there, the product demos are there, but confirmed incidents in the wild? Not documented.
Note: this absence of proof isn't comfort; to me, it's the calm before the storm.
Remember when phishing required actual skill (let's be honest, it did)? You needed to understand SMTP protocols, manage infrastructure, write convincing copy, and handle the technical logistics of campaigns. That barrier to entry has collapsed - between SpamGPT and the broader crop of GenAI chat tools, you no longer need any technical skills at all.
SpamGPT applies the marketing automation playbook directly to crime:
Deliverability testing before launch. That "Inbox Check" module runs test sends to verify messages land in inboxes, not spam folders. Legitimate marketers do this. Now attackers do too, with the same point-and-click simplicity.
A/B testing for fraud. The platform tracks open rates, clicks, and engagement. Attackers can test ten subject lines, see which performs best, then scale the winner. Your marketing team's optimization strategy, weaponized.
Analytics dashboards for criminal ROI. Real-time campaign monitoring shows which lures work, which domains are burned, which targets engage. It's data-driven crime with really solid reporting.
The connection to my Anthropic piece is stark. We worried about AI helping criminals write better emails, and we've already moved past that. Now AI manages their entire campaign infrastructure, from content generation to delivery optimization to performance analytics.
SpamGPT is particularly dangerous because it's specifically designed to exploit the documented weaknesses in Secure Email Gateways (SEGs). Bypassing SEGs isn't especially hard today, but SpamGPT automates and scales the testing and legwork needed to do it.
Recent data from nearly 2,000 organizations shows that SEGs miss an average of 67.5 phishing emails per 100 mailboxes monthly. The attacks that slip through? Vendor scams (30-42%) and credential theft (21-41%) - exactly the kind of socially engineered, text-based attacks that SpamGPT's AI excels at creating. SEGs were built for an era of malicious attachments and suspicious links. They struggle with attacks that rely on psychology rather than payloads.
SpamGPT's features read like a checklist for SEG evasion:
The platform essentially gamifies SEG bypass. Attackers can iterate rapidly, testing variations until they find what lands in inboxes. For organizations still relying solely on perimeter defenses (ahem...SEGs), this represents a massive mismatch with static rules versus dynamic, AI-powered optimization.
SpamGPT's campaign creation interface enables easy sender spoofing and header manipulation
Here are three attack patterns SpamGPT enables - none requiring traditional hacking skills:
Attackers use the Inbox Check module to test their CEO impersonation emails against various filters. Once they achieve consistent inbox delivery, they launch targeted campaigns against finance teams. The custom header configuration lets them fine-tune sender authentication to pass basic checks. Twenty SMTP accounts means twenty different attack vectors if one gets blocked.
KaliGPT generates hundreds of contextually relevant phishing emails - each slightly different, all optimized for engagement. The campaign management system distributes these across multiple SMTP servers, tracks which messages get clicks, and automatically scales successful templates. It's the same iterative improvement process your marketing team uses, except the conversion goal is stolen credentials.
Those X-header customization options are spoofing tools. Attackers can create emails that appear to come from trusted vendors, complete with headers that match legitimate communication patterns. Combined with AI-generated content that mimics vendor communication styles, these attacks bypass both technical controls and human skepticism.
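On the defensive side, one of the simplest tells in vendor-impersonation mail is a mismatch between the From domain and the Reply-To domain - the attacker spoofs a trusted sender but needs replies routed somewhere they control. A minimal sketch using Python's standard `email` module (the domain names are hypothetical):

```python
from email import message_from_string
from email.utils import parseaddr

def header_mismatch(raw_email: str) -> bool:
    """Flag emails whose From and Reply-To domains differ -
    a common tell in vendor-impersonation phishing."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: Accounts <billing@trusted-vendor.com>\n"
    "Reply-To: billing@trusted-vendor-payments.net\n"
    "Subject: Updated remittance details\n\n"
    "Please use the new account below."
)
print(header_mismatch(sample))  # → True
```

A mismatch alone isn't proof of fraud (mailing lists and ticketing systems do this legitimately), but it's a cheap signal worth weighting in any triage pipeline.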
Forget the theoretical defense stuff - here are a few things that need to happen:
SpamGPT's domain spoofing features hunt for permissive policies. Those "multiple sender identities" in the screenshots I shared earlier? They're shopping for domains like yours. If you're not at p=reject, you could soon be on their target list. Most organizations aren't there yet. Be the exception.
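Checking where you stand takes one DNS TXT lookup on `_dmarc.yourdomain.com`. As a sketch, here's the policy-extraction half in stdlib-only Python (the record value is a hypothetical example; fetching it live needs a DNS library or `dig`):

```python
def dmarc_policy(txt_record: str) -> str:
    """Return the p= policy tag from a DMARC TXT record ('none' if absent)."""
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return tags.get("p", "none").strip().lower()

# Hypothetical value of the _dmarc.example.com TXT record
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
if dmarc_policy(record) != "reject":
    print("This domain is permissive enough to interest a SpamGPT operator")
```

Anything short of `p=reject` (`none`, `quarantine`, or no record at all) leaves room for spoofed mail bearing your domain to reach inboxes.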
Don't get put off by what sounds like a "best practice" - they're called that because they're KNOWN methods that work.
Simply put, division of responsibilities (DoR) requires at least two people to complete any sensitive action. One person has the ability to initiate something but doesn't have the permissions to execute. Another person has the ability to execute but doesn't have the permissions to initiate.
So no single email should trigger:
SpamGPT's AI crafts context-aware requests that perfectly mimic internal communications. When an executive emails about "adding a new vendor," two other people have to be involved to make that happen. Make this your default process, not your exception.
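The control described above is simple enough to sketch in a few lines. This is an illustrative toy (the class and request names are mine, not from any product): the point is that the initiator of a sensitive action can never also be its approver.

```python
class TwoPersonApproval:
    """Minimal sketch of division of responsibilities (DoR):
    one person initiates, a different person must execute."""

    def __init__(self):
        self.pending = {}  # request_id -> initiator

    def initiate(self, request_id: str, initiator: str) -> None:
        self.pending[request_id] = initiator

    def execute(self, request_id: str, approver: str) -> bool:
        initiator = self.pending.get(request_id)
        if initiator is None or approver == initiator:
            return False  # unknown request, or same person on both sides
        del self.pending[request_id]
        return True

flow = TwoPersonApproval()
flow.initiate("vendor-banking-change-42", initiator="alice")
print(flow.execute("vendor-banking-change-42", approver="alice"))  # → False
print(flow.execute("vendor-banking-change-42", approver="bob"))    # → True
```

However you implement it - ticketing workflow, payment platform, or code like this - the invariant is the same: a single compromised or deceived mailbox can start a sensitive action but can never finish one.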
Modern email security needs an adaptive approach that goes beyond static rules. The IRONSCALES platform uses natural language understanding to build detailed social graphs of normal communication patterns for every inbox, creating personalized baselines that detect even subtle anomalies.
This adaptive AI continuously evolves through a human-in-the-loop process, incorporating feedback from over 30,000 security professionals globally to identify never-before-seen threats. The platform clusters attacks from the same campaign, even when they use different sender addresses, subject lines, content, or originate from different SMTP servers - catching the infrastructure patterns that SpamGPT's 20-account rotation system creates.
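To make the clustering idea concrete - and to be clear, this is my own crude illustration, not how any vendor's engine works - you can fingerprint messages by stripping exactly the parts attackers rotate (addresses, invoice numbers, amounts) and grouping on the template skeleton that remains:

```python
import re
from collections import defaultdict

def fingerprint(subject: str, body: str) -> str:
    """Crude campaign fingerprint: remove the rotated parts
    (addresses, numbers) and keep the template skeleton."""
    text = f"{subject} {body}".lower()
    text = re.sub(r"\S+@\S+", "<addr>", text)  # sender/target addresses
    text = re.sub(r"\d+", "<num>", text)       # invoice numbers, amounts
    return re.sub(r"\s+", " ", text).strip()

emails = [
    ("Invoice 4411 overdue", "Pay $980 to billing@a.example now"),
    ("Invoice 7302 overdue", "Pay $455 to billing@b.example now"),
    ("Team lunch Friday",    "See you at noon"),
]
clusters = defaultdict(list)
for i, (subj, body) in enumerate(emails):
    clusters[fingerprint(subj, body)].append(i)
print([ids for ids in clusters.values() if len(ids) > 1])  # → [[0, 1]]
```

The two "different" invoice lures collapse to one cluster despite distinct senders, amounts, and invoice numbers - the same intuition, at toy scale, behind grouping a 20-account rotation into a single campaign.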
SpamGPT represents the industrialization of phishing. I expect to see many more of these offerings. The defensive implications are clear...assume zero-skill attackers now have professional-grade capabilities. The teenager in their bedroom and the organized crime syndicate now have access to the same tools. Your defense strategy needs to account for this new reality.