I've been giving talks lately about the evolution of phishing attacks - tracking them from the Nigerian Prince emails of 1993 all the way to today's deepfake video calls. It's a fun trip down memory lane, filled with AOL phishing scams and QR code zebras (long story). But here's what I've realized...we're not just watching phishing evolve anymore. We're witnessing a full-blown renaissance of old-school attacks, and it's all thanks to AI.
The two attacks dominating both business and consumer inboxes right now? Credential theft and vendor scams. Classic stuff. Both spoof familiar businesses or people you know. Both are after the same thing: access to your accounts. The difference now is the personalization. It's freakishly good, because AI makes it easier than ever for attackers to research targets and craft convincing messages.
One lovely benefit of working in email security is how it's brought me closer to my family. I kid, but I seriously get a forwarded email every week from someone in the family asking if I think it's a scam. My wife is an artist; she gets fake supplier invoices and bogus notifications about bids and sales. My 70-year-old father-in-law collects ancient coins, and his phishing emails are all about certificate-of-authenticity notifications. He's also getting incredibly authentic-looking Social Security and military pension emails.
Nobody is immune. We're just at the beginning of this renaissance.
Let me set the stage with where we've been. The infamous Nigerian Prince scam hit inboxes around 1993, but the con itself goes back to the late 1800s. Back then it was called the Spanish Prisoner scam: handwritten letters sent long distances to wealthy targets, claiming a noble was wrongfully imprisoned and needed money to secure his release. The payoff? Four to ten times what you sent.
People fell for it because letters were rare and expensive, so authenticity was assumed. They also placed high value on connecting with people of status (kind of like today's social media influencer worship, if you think about it).
By 1996, we got the first attack officially labeled "phishing": AOL account deactivation scams. These worked because everyone had AOL email addresses, those addresses were easy to find, and spoofing email senders was trivial. Email authentication protocols like SPF and DKIM were still eight years away. Nobody knew to suspect the authenticity of emails. Everyone was vulnerable.
Then around 2013, Business Email Compromise emerged. Unlike earlier attacks that cast wide nets, BEC got personal. No malicious links or attachments, just malicious intent in the message body. Requests to update routing numbers, establish new vendor accounts, change payroll services. The formatting and language looked identical to legitimate internal communications. When you've got hundreds of vendors and thousands of invoices, it's easy to see how a spoofed message from a company executive or trusted vendor could slip through.
BEC attacks cost US businesses an average of $50,000 per incident. That number's probably low since many never get reported to the FBI.
By 2020, QR code phishing took off. QR codes had been around for years, but the pandemic made them ubiquitous for touchless everything. (And let's be honest, who doesn't want to know what this zebra has to say?) Attackers loved them because, just like with those first AOL phishing emails in 1996, nobody stopped to consider the risks. People abandoned the lessons learned over decades and gleefully scanned codes with their phones. This is exactly what some attackers want, because it moves the attack away from protected corporate devices to relatively unprotected personal mobile devices, where phishing pages might not get blocked and malware can be installed.
Then came November 30, 2022. ChatGPT launched, and everything changed.
Within a week of ChatGPT's public release, we saw a remarkable improvement in phishing email quality. Gone were the misspellings, grammar mistakes, and awkward business terminology. The emails looked complete and polished.
We tested it ourselves here at Ironscales. We took obvious phishing emails and asked ChatGPT to rewrite them. The results were incredible, like really incredible. While services like ChatGPT, Microsoft Copilot, and Llama have guardrails to prevent nefarious use, attackers simply created their own tools. Enter WormGPT, SpamGPT, and similar services, designed and optimized for doing bad things.
That personalization I mentioned earlier? Yeah, AI makes it effortless. Attackers can quickly research targets, understand communication patterns, identify relationships, and craft messages that sound exactly right. The 70-year-old coin collector gets certificate scams. The artist gets supplier invoices. The finance manager gets vendor payment updates that match the company's actual processes.
Right on the heels of GenAI-powered phishing came deepfakes. Look, I know everyone talks about the $25 million Arup case. Yes, that was a wake-up call, and we've been banging that drum ever since. A finance worker in Hong Kong joined what appeared to be a legitimate Zoom call with the company's CFO and other colleagues - all of them deepfakes. The presence of multiple "colleagues" created peer pressure that made the victim ignore his suspicions. He later said that if it had been a one-on-one meeting, he would have called back on a different number to validate the request.
But that was years ago. Here's what's actually happening now, according to our recent annual research:
At an average of $280,000 per incident, deepfake-related attacks now cost more than five times as much as BEC attacks.
(Curious about those $280,000 to $1MM numbers? I didn't make them up; they came from recent research - check it out, the PDF is right here, no form fill.)
This isn't a one-off anymore. It's not even rare. It's the new normal.
One of my colleagues tested this with his family. He created two deepfake videos of himself asking for urgent financial help and texted them to six family members. None of them figured out the videos were fake; one sister immediately texted back asking how much he needed and where to send it. Even after he came clean, all of them challenged him, insisting the videos were real and accusing him of playing games.
When he revealed it was a test, she said she had just been focused on helping him out, concerned about his predicament. The thought that it might be a scam never crossed her mind. Then she asked about the Instagram logo in the video - she'd assumed the logo meant it was genuine.
That's all it took. A badge on the screen.
So how do we fight back? With the boring stuff: best practices. But here's the thing: they're called best practices because they're proven methods that work.
Here's what actually helps:
1. Multi-Factor Authentication (MFA) & Unique Passwords
Not exciting, but effective. Different passwords for different services. MFA everywhere it's offered.
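If you're curious how little magic is behind those six-digit codes, here's a minimal sketch of the TOTP algorithm (RFC 6238) that most authenticator apps implement. Everything here is standard-library Python; the secret is a made-up example, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Illustrative secret only - never hardcode a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```

The point of showing this: the code proves possession of a shared secret that rotates every 30 seconds, which is exactly what a phished password alone can't do.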
2. Code Words for Verification
We use these at Ironscales, and I use them at home. It's a simple concept: establish passwords, code words, or phrases in advance, then ask for the code word if something feels off. Our head of HR announced during an all-hands meeting that everyone would get an email about open enrollment and to look for a specific password before opening it. You can establish these for any sensitive transaction.
3. Verify Through a Separate Channel
CFO sends an urgent payment request via email? Call them back on their known number. Direct message on Slack asking for credentials? Walk over to their desk. It feels awkward, but it works.
4. Segregation of Duties (SoD)
Checks and balances have been around forever because they work. In traditional accounting, one person had access to the checkbook but couldn't sign checks. Another person could sign checks but didn't have checkbook access. A third person reconciled bank statements. Nobody could commit fraud alone.
Apply this same logic digitally. The person who can initiate wire transfers shouldn't be the same person who can approve them. The developer who writes code shouldn't be the only one who can push it to production. The admin who can update DNS settings shouldn't be the same person who manages email authentication.
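If you're wiring this into software rather than policy documents, the rule is simple to enforce. Here's a minimal maker-checker sketch for wire transfers - the class and field names are hypothetical, not any particular banking system's API:

```python
from dataclasses import dataclass, field

@dataclass
class WireTransfer:
    """Hypothetical transfer record that enforces maker-checker separation."""
    amount: float
    initiated_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The core SoD rule: the initiator can never approve their own transfer.
        if approver == self.initiated_by:
            raise PermissionError("initiator cannot approve their own transfer")
        self.approvals.add(approver)

    @property
    def executable(self) -> bool:
        # Money moves only after at least one independent sign-off.
        return len(self.approvals) >= 1

transfer = WireTransfer(amount=50_000, initiated_by="alice")
transfer.approve("bob")       # fine: a second person signs off
print(transfer.executable)    # True
# transfer.approve("alice")   # would raise PermissionError
```

The same pattern applies to the code-push and DNS examples above: the system, not politeness, refuses to let one person do both halves.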
5. Employee Awareness & Training
Yes, more vegetables. But we have data proving this works. Our customers who send phishing simulations and security awareness training videos see a 3x improvement in employees reporting phishing emails instead of falling for the tests. Real people are still the best defense (when they know what to look for).
This phishing renaissance isn't slowing down. AI has made it absurdly easy for attackers to create personalized, convincing attacks - and they no longer need technical skills to automate and scale them. The old techniques are back, they're better than ever, and nobody is immune.
The good news? The defenses work. They're not sexy. They're not cutting-edge. But they're effective (when people actually use them).
So yes, eat your vegetables. Brush your teeth. Use MFA. Verify through separate channels. Train your teams. It might not be exciting, but it beats explaining to your board how you lost $280,000 to a deepfake.
Onward!