I’ve been tracking cybercrime for decades, but I’ve never seen the velocity of innovation from bad actors match what we’re experiencing today. A recent report from the Israel Internet Association (ISOC-IL) on “Algorithmic Scams” has uncovered hundreds of incidents where generative AI was leveraged to produce highly realistic deepfake video and image ads on platforms like Facebook, Instagram, and TikTok, often featuring prominent public figures encouraging viewers to “invest” in fraudulent schemes.
What’s remarkable (and deeply concerning) is that these campaigns don’t stop at social media. The same criminals recycle the personas, scripts, and AI‑generated content into spear‑phishing emails, business email compromise (BEC) attempts, and voice scams targeting both consumers and enterprises.
What starts as a consumer scam on Instagram can easily morph into a tailored, CEO‑impersonation email hitting your CFO’s inbox.
ISOC‑IL researchers mapped a four‑step fraud funnel that looks chillingly similar to the kill chains we defend against in enterprise phishing.
We’ve already seen cases where initial targeting data from social platforms fuels precision phishing inside corporate environments.
The weaknesses that let deepfake ads bypass review are the same weaknesses that let targeted phishing emails slip past your filters.
Traditional email security tools rely heavily on static rules, domain reputation, and pattern‑matching. We know they miss well‑crafted spear‑phishing.
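To make that gap concrete, here is a toy sketch of static filtering; the keyword list and domain blocklist are invented placeholders, not any vendor's actual rules. A bulk scam trips the rules instantly; a fluent, AI‑written impersonation does not.

```python
# Toy illustration of static, rule-based filtering. Keywords and blocklist
# entries are invented placeholders for this sketch.

SUSPICIOUS_KEYWORDS = {"lottery", "wire transfer", "urgent!!!", "crypto giveaway"}
KNOWN_BAD_DOMAINS = {"phish-mart.example", "free-crypto.example"}

def static_filter(sender_domain: str, subject: str, body: str) -> bool:
    """Return True if the message should be quarantined."""
    text = f"{subject} {body}".lower()
    return sender_domain in KNOWN_BAD_DOMAINS or any(
        kw in text for kw in SUSPICIOUS_KEYWORDS
    )

# A bulk scam trips the rules immediately...
print(static_filter("free-crypto.example", "CRYPTO GIVEAWAY inside", "..."))  # True

# ...but a fluent, AI-written CEO impersonation sails straight through:
print(static_filter(
    "lookalike-corp.example",
    "Q3 acquisition - need this handled today",
    "Hi Dana, please process the attached invoice before the board call. "
    "Keep this between us for now.",
))  # False
```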
Social platforms face the same weakness. ISOC‑IL found that scammers use Dynamic Ads and A/B testing to cloak their true content from automated review systems, showing benign ads to moderators and fraudulent ones to targeted victims.
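Conceptually, the cloaking decision can be as simple as the sketch below: one ad URL, two very different pages. The reviewer IP range, crawler markers, and file names are all hypothetical stand-ins for the signals scammers key on.

```python
# Minimal sketch of the cloaking pattern described above.

REVIEWER_IP_PREFIXES = ("203.0.113.",)  # hypothetical platform-review addresses
CRAWLER_UA_MARKERS = ("bot", "crawler", "externalhit")

def serve_landing_page(client_ip: str, user_agent: str, came_from_ad: bool) -> str:
    ua = user_agent.lower()
    looks_like_review = (
        client_ip.startswith(REVIEWER_IP_PREFIXES)
        or any(marker in ua for marker in CRAWLER_UA_MARKERS)
        or not came_from_ad  # direct fetch rather than a targeted ad click
    )
    if looks_like_review:
        return "benign_product_page.html"      # what automated review sees
    return "deepfake_investment_pitch.html"    # what the targeted victim sees

# The moderator's crawler and the victim's phone get different answers:
print(serve_landing_page("203.0.113.7", "PlatformReviewBot/2.1", False))
print(serve_landing_page("198.51.100.9", "Mozilla/5.0 (iPhone)", True))
```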
Example of an AI‑generated deepfake investment ad disguised as a news interview: targeted victims see the fake investment pitch (right), while moderators are shown a harmless product ad (left). Source: ISOC‑IL “Algorithmic Scams” report.
This is conceptually identical to phishing redirect chains in email: what the security system “sees” is not what the end user experiences.
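The defensive counterpart is to resolve the same link from more than one vantage point and flag divergence. A minimal sketch using the Python requests library, with the client profiles as simplified assumptions:

```python
# Sketch of a cloaking probe: follow the same link as two different
# "identities" and flag the URL when the final destinations diverge.
import requests

PROFILES = {
    "scanner": {"User-Agent": "SecurityScanner/1.0"},
    "user":    {"User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"},
}

def final_destination(url: str, headers: dict) -> str:
    # allow_redirects=True walks the whole redirect chain; resp.url is the last hop
    resp = requests.get(url, headers=headers, allow_redirects=True, timeout=10)
    return resp.url

def looks_cloaked(url: str) -> bool:
    stops = {name: final_destination(url, hdrs) for name, hdrs in PROFILES.items()}
    return stops["scanner"] != stops["user"]
```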
And now, attackers are pairing these cloaking tricks with another dangerous upgrade: AI‑generated deepfake media that can convincingly impersonate people you know and trust.
Today’s fraudsters routinely embed AI‑generated deepfake media, from fabricated executive “video interviews” to cloned voices, into their scams. Detecting these requires advanced biometric analysis to catch facial inconsistencies, lip‑sync errors, and synthetic speech patterns. Most ad‑review and email‑security stacks still can’t reliably flag synthetic personas in motion or voice, leaving a critical blind spot.
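As a sketch of what such a multi-signal check might look like, assuming hypothetical model outputs rather than any real detector:

```python
# Skeleton of a multi-signal deepfake check. The three scores are
# assumptions: in a real deployment each would come from a trained model
# (face tracking, lip-sync alignment, synthetic-speech detection).
from dataclasses import dataclass

@dataclass
class MediaScores:
    face_consistency: float   # 0..1: frame-to-frame facial geometry stability
    lip_sync: float           # 0..1: audio-to-mouth-movement alignment
    voice_naturalness: float  # 0..1: 1.0 = indistinguishable from human speech

def verdict(s: MediaScores, threshold: float = 0.6) -> str:
    # Flag the clip if any single signal is weak: deepfakes often fail on
    # one dimension (commonly lip sync) while passing the others.
    weakest = min(s.face_consistency, s.lip_sync, s.voice_naturalness)
    return "likely synthetic" if weakest < threshold else "no strong deepfake signal"

# Example: a fabricated executive "interview" with sloppy lip sync.
print(verdict(MediaScores(face_consistency=0.9, lip_sync=0.3, voice_naturalness=0.8)))
```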
In my own practice, I’m advising CISOs to treat this as a cross‑channel threat. Email, web, and social vectors are now part of one attack continuum.
Defending against AI‑powered scams means connecting the dots across email, social media, and the web, because the attackers already do.
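A minimal sketch of what that correlation could look like, with invented feed fields standing in for real telemetry:

```python
# Sketch of cross-channel correlation: pool impersonation indicators from
# email, social, and web feeds, and prioritize personas seen in more than
# one channel. Feed contents and field names are assumptions.
from collections import defaultdict

def cross_channel_personas(indicators: list[dict]) -> dict[str, set[str]]:
    """Map each impersonated persona to the set of channels it appears in."""
    seen: dict[str, set[str]] = defaultdict(set)
    for ioc in indicators:
        seen[ioc["persona"]].add(ioc["channel"])
    # A persona hitting two or more channels is exactly the continuum above.
    return {p: ch for p, ch in seen.items() if len(ch) > 1}

feed = [
    {"persona": "CEO of OurCorp", "channel": "facebook_ad"},
    {"persona": "CEO of OurCorp", "channel": "email"},
    {"persona": "generic lottery", "channel": "email"},
]
print(cross_channel_personas(feed))  # {'CEO of OurCorp': {'facebook_ad', 'email'}}
```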
My practical steps include:
- Correlating threat intelligence across email, social, and web, so an impersonation spotted in one channel raises the alert level in the others.
- Evaluating detection for synthetic media (facial inconsistencies, lip‑sync errors, cloned voices), not just malicious links and attachments.
- Probing suspicious links from multiple vantage points, because cloaking means what your scanner sees is not what your users get.
- Briefing executives and finance teams that a convincing voice or video is no longer proof of identity.
I believe we are witnessing the industrialization of AI‑powered fraud. The ISOC‑IL “Algorithmic Scams” report is a wake‑up call. Not just for consumers, but for enterprise security leaders who need to understand how these same techniques are being repurposed for corporate targets.
If you’re a CISO, ask yourself:
- Would my email defenses catch a BEC attempt backed by a cloned executive voice or a fabricated video?
- Does my link analysis see the same page my users do, or only the version shown to scanners?
- Am I correlating impersonation attempts across email, social media, and the web, or treating each channel in isolation?
I’d love to hear from fellow CISOs. What’s working in your organization? Join the conversation and share your thoughts in the comments on my LinkedIn.