
AI is eating the world. That may be extreme, but you can't scroll through your LinkedIn feed without seeing tips for using AI to accomplish a task or reading an AI-generated post. The truth is that AI is rapidly changing the world, and the ways people are figuring out how to harness its value are just as innovative. I've seen prompts for creating workout routines, meal plans, grocery lists, entire novels, and business strategies.

However, AI can also be used for more sinister applications. Earlier this week, the "Godfather of AI," Geoffrey Hinton, left his role at Google to speak out about the dangers of AI, a technology that he helped develop.  

Leading email security providers have leveraged AI for years to detect and automatically remediate advanced phishing threats. However, with the recent widespread availability of AI comes the use of the technology for cyberattacks. In this post, we'll discuss how AI is changing email security and provide tips for combating these emerging threats.  

The Threat of AI-Powered Phishing Attacks  

With the application of AI, phishing is not only likely to remain the leading vector for cybercrime but is also expected to increase. One form of phishing likely to grow is the business email compromise (BEC) attack and its variants. A recent Osterman Research report notes that "large organizations anticipate a 43.3% increase in the threat created by BEC attacks in the next 12 months."

Unfortunately, AI-powered phishing attacks have already started. Below are just a few examples of how AI is fueling an increase in successful attacks.  

Advanced Phishing Attacks

Phishing emails are getting harder to detect. One reason is that cybercriminals are using sophisticated tactics to identify and bypass vulnerabilities in traditional email security solutions. A BEC attack, for example, often relies on social engineering rather than a malicious link or attachment, allowing it to slip past secure email gateways and native email security controls. Poor grammar and obvious misspellings used to be a good indicator that an email was malicious, but with AI, criminals can eliminate those giveaways with minimal effort. In this video, Audian Paxson breaks down how easy it is for a cybercriminal to create a convincing email in just a few seconds using ChatGPT.

Taking it one step further, bad actors can now effortlessly launch highly targeted spear phishing campaigns by asking ChatGPT to review social media profiles, learn an individual's communication patterns, and craft a message designed to persuade a target to share login credentials or data, or to initiate a transaction.

Increase in Deepfakes

More alarming, however, is how easy it is to create deepfakes that enhance an attack and increase the perceived legitimacy of the message. With commercially available tools that can clone someone's voice from just a few audio clips, bad actors can carry out new and more convincing BEC attacks.

A few years back, cybercriminals used AI voice technology to convince the CEO of a UK-based energy firm, who believed he was speaking with his boss, to transfer $243,000 to the bank account of a Hungarian supplier. More recently, bad actors used AI-generated voice to simulate the voice of a loved one in distress to scam vulnerable people out of thousands of dollars.

Combating AI-Powered Phishing Attacks with AI and Human Insights  

While AI may have inadvertently sparked an evolution in phishing, at the same time it has improved organizations' ability to detect and remediate advanced phishing attacks efficiently. With AI-powered email security, organizations can automatically detect up to 99% of advanced phishing attacks. By combining AI-based phishing detection with a focused security awareness program, organizations can address 100% of phishing attempts while using human insights to strengthen the AI's ability to detect zero-day attempts.

IRONSCALES combines AI with human insights to detect and respond to phishing attacks. The platform uses machine learning algorithms to analyze incoming emails and identify potential phishing attacks. If a phishing attack is detected, the platform alerts the security team.
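To make the general idea concrete, here is a minimal sketch of how a machine-learning email classifier can score incoming messages. This is purely illustrative and is not the IRONSCALES implementation: the toy training data, the model choice (TF-IDF features with logistic regression from scikit-learn), and the alerting step are assumptions chosen only to show how a model trained on labeled emails can flag suspicious new messages.

# Illustrative sketch only: a toy machine-learning email classifier,
# not the IRONSCALES platform.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: your account is locked, verify your password here",
    "Wire transfer needed today, reply with bank details",
    "Team lunch is moved to noon on Friday",
    "Here are the slides from yesterday's planning meeting",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a simple logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message; a high probability would trigger an
# alert to the security team for review.
incoming = "Please confirm the urgent wire transfer before 5 pm"
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {phishing_probability:.2f}")

In practice, production systems draw on far richer signals than message text alone and continuously retrain on analyst feedback, which is where the combination of AI and human insights described above comes in.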

 Request a demo to learn more about how IRONSCALES protects enterprise organizations from advanced threats.

Post by Jeff Rezabek
May 4, 2023