Deepfakes have moved from an online curiosity to a significant security concern. In our recent survey of 207 IT and cybersecurity professionals, more than 74% said they're very worried about where this technology is headed. And I don't blame them. The idea that someone could mimic your boss's voice or send a video that looks just like your colleague is unsettling, to say the least.
In my last blog, I talked about how deepfake social engineering combines AI wizardry with classic phishing tactics to target organizations like never before. If you missed that one, catch up here; it'll give you a solid foundation for why we're ringing the alarm bell.
Our latest survey showed that 75% of respondents had experienced at least one deepfake-related incident in the past year. The chart below highlights which types of deepfakes companies have run into, with personalized phishing emails and static images leading the way. And let's not forget: 53% of security pros pointed to email as the biggest risk, because deepfake phishing emails make it nearly impossible to tell reality from manipulation.
Deepfakes might be getting smarter, but so can we. Let’s stay ahead by staying alert.
Want more on what we found? The full report digs into all the details, and you can check it out right here.