
For better or worse, AI is now widely accessible and used for everything from writing articles to identifying phishing threats. For anti-phishing, that accessibility is a benefit: it helps build better cybersecurity solutions, automates investigation and response, and strengthens prevention and detection, making it possible to identify and mitigate massive volumes of phishing threats quickly.

The downside of AI becoming a commodity is that the same technology is now available to adversaries, the cybercriminals, making it a double-edged sword. One way cybercriminals are leveraging AI in their attacks is by using Deepfake technologies to deceive recipients.

The Dangers of Deepfake Phishing

Combining Deepfake Technologies

Deepfake technology uses AI to create fabricated content such as video and photo impersonations, cloned voices, and fake images or emails. The result looks like the real thing: it has the proper context, and it sounds or reads like a legitimate message.

Now imagine a combined attack in which a threat actor impersonates the CEO in an email asking accounting to create a new vendor account and pay a fake invoice, then follows up with a fake voicemail in the CEO’s cloned voice vouching for the email’s authenticity (a.k.a. pretexting).

Sounds surreal, right? Yet we have already seen each of these techniques launched independently. History tells us it is only a matter of time before threat actors combine phishing and pretexting methods to deliver compelling, coordinated, streamlined attacks.

As much as we want to think we have our personal information under control, a frightening amount of our personal data is now publicly accessible over social media and other places on the web. This data can be easily harvested, correlated (using automation), and put into context using sophisticated models designed to look for opportunities to create highly targeted attacks.

Era of Phishing 3.0?

This could be a scary evolution of Business Email Compromise (BEC) attacks and the beginning of a new era—Phishing 3.0.

Traditional, signature-based tools such as Secure Email Gateways (SEGs) already struggle to address BEC attacks, account takeovers, and polymorphic attacks; against Deepfake phishing they will be all but useless. The volume and sophistication of these attacks will overwhelm security teams that are already stretched thin.

3 Tips to Combat Deepfake Phishing

1. Fight AI with AI

Manual playbooks and step-by-step automation and orchestration solutions will continue to fall behind. Security professionals are increasingly turning to AI-based tools to prepare for these types of emerging threats.

AI’s speed and computational capacity far outperform the human brain, but AI still lacks the logical judgment that humans bring.
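
To make the idea concrete, here is a minimal sketch of how an AI-based phishing detector can work: train a text classifier on labeled emails and use its score to decide whether a message should be quarantined or escalated to an analyst. The tiny dataset, model choice, and threshold logic below are illustrative assumptions only, not how any particular vendor’s detection engine is built.

```python
# Minimal, illustrative sketch of AI-assisted phishing detection.
# The training data, features, and model below are toy assumptions;
# production systems learn from millions of labeled messages and many
# more signals than raw text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: wire payment to our new vendor account today",
    "Your mailbox is full, verify your password here",
    "Team lunch moved to 1pm on Thursday",
    "Quarterly report attached for your review",
]
labels = [1, 1, 0, 0]

# Turn email text into features and fit a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; anything above a tuned threshold would be
# quarantined or routed to an analyst for review.
suspect = ["CEO request: create a vendor account and pay the attached invoice"]
phishing_probability = model.predict_proba(suspect)[0][1]
print(phishing_probability)
```

The point is speed and scale: a model like this scores every inbound message in milliseconds, while human judgment is reserved for the cases the model flags.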

2. Train Your Users

AI is not a silver bullet; it is a tool. There will always be attacks that bypass even the best AI models out there. AI/ML-based solutions are only as good as the data we feed into them, so fighting against AI that creates and manipulates new datasets can be challenging.

Therefore, the human element continues to be a significant protection strategy against social engineering and phishing threats.

It’s important that users are trained to understand that the new era of phishing is more sophisticated and deceptive than before. Users need training that leverages real-world examples of phishing attacks as well as highly personalized simulated phishing emails to reduce the chances of them falling for a real attack, because every IT/Security pro knows that no single technology is 100% perfect.

3. Collaborate

Fighting phishing is not easy when done in silos. Real-time information about trending attack strategies must be communicated effectively across security teams to break the asymmetry between attackers and defenders. We must collaborate in an automated and seamless way to ensure that we don’t fall victim to attacks that are already well known in the wild. Together, we are safer.
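
As a rough illustration of what automated, seamless sharing could look like, the sketch below packages an observed phishing indicator as JSON and pushes it to a shared threat feed. The endpoint URL, field names, and indicator values are hypothetical placeholders, not a real community or vendor API.

```python
# Hypothetical sketch of automated threat-intel sharing: describe an
# observed phishing lure as structured data and push it to a shared feed
# so peer defenses can block it immediately. The endpoint and schema are
# placeholders, not a real API.
from datetime import datetime, timezone

import requests

indicator = {
    "type": "phishing-indicator",
    "observed_at": datetime.now(timezone.utc).isoformat(),
    "sender_domain": "examp1e-vendor.com",      # look-alike domain used in the lure
    "subject_pattern": "new vendor account*",   # simple wildcard on the subject line
    "tactic": "BEC / Deepfake pretexting",
}

# Publish the indicator to a (placeholder) community feed endpoint.
response = requests.post(
    "https://threat-feed.example.org/v1/indicators",
    json=indicator,
    timeout=10,
)
print(response.status_code)
```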

Download the Osterman Report, “The Business Cost of Phishing,” to see more email security trends and data.

Post by Eyal Benishti
November 8, 2022