
Zoom's AI Avatar Watermark Is Security Theatre (And Attackers Already Know It)

Written by Audian Paxson | Nov 20, 2024

Zoom CEO Eric Yuan recently used his AI avatar to open a quarterly earnings call. In the top right corner of the video, a small badge appeared: "CREATED WITH ZOOM AI COMPANION."

Presumably, this watermark is meant to reassure viewers. "Don't worry, you'll know when you're talking to an avatar instead of a real person."

Here's the problem...anyone with basic skills can create that badge in about 30 seconds. I know because I tried it. And adding it as a watermark to a video stream isn't a high bar for motivated attackers.

Zoom just announced they're rolling out custom AI avatars in early 2025 (photorealistic digital versions of employees that can deliver scripted messages in Zoom Clips). You record yourself once, then type any message you want, and your avatar speaks it with your face, voice, and mannerisms.

The technology itself isn't the threat. But to me, the security measures around it are security theatre. And that theatre creates something far more dangerous than the avatars themselves...false confidence.

Fake Security Measures Are Worse Than No Security Measures

Let's be clear about what Zoom's actually launching: scripted avatar clips. You type a message, your avatar delivers it. This isn't autonomous AI attending meetings on your behalf (that's Yuan's stated vision for 5+ years down the road). What's coming in early 2025 is essentially a really sophisticated teleprompter that looks like you.

I guess the watermark is supposed to solve the authentication problem. When you see that badge, you know you're watching an AI-generated clip instead of a live person.

Now, it's possible Zoom has more robust security measures behind the scenes (cryptographic signing, metadata verification, chain-of-custody tracking). They may even implement these before launch. But even so...the visible indicator is still just pixels on a screen.

Which means an attacker creating a deepfake video can add the exact same badge. Same font. Same placement. Same wording.

Even if Zoom builds perfect technical authentication, most users won't verify the cryptographic signature or check the metadata. They'll see the badge and trust it. The visual chrome becomes the security measure people actually rely on, regardless of what happens underneath.

And that visual indicator is trivial to replicate. The badge that's supposed to warn you about synthetic media becomes the thing that makes you trust synthetic media.
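
To make the point concrete, here's roughly what replicating the badge looks like in a few lines of Python with the Pillow imaging library. This is a sketch only; the font, size, and placement below are my guesses, not Zoom's actual styling. The point is that the indicator is ordinary pixels anyone can draw.

    from PIL import Image, ImageDraw

    # Stand-in for a single video frame; a real pipeline would stamp
    # every frame (or burn the overlay in with a video tool).
    frame = Image.new("RGB", (1280, 720), color="black")

    draw = ImageDraw.Draw(frame)
    # Approximate badge in the top right corner (styling is a guess).
    draw.text((900, 24), "CREATED WITH ZOOM AI COMPANION", fill="white")

    frame.save("frame_with_badge.png")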

This creates a more dangerous environment than if the badge didn't exist at all:

  • False confidence (at scale)
    Users learn to associate the badge with "verified avatar" instead of "potentially spoofed content." The watermark trains people to trust the wrong signal.
  • Legitimized deception
    Attackers don't need to bypass your security measures...they just need to copy your security measures. The badge becomes authentication instead of disclosure.
  • Reduced vigilance
    When users think they have a reliable way to identify avatars, they stop applying other verification methods. "It had the badge, so I knew it was legitimate" becomes the explanation for why due diligence didn't happen.

The Power Dynamics Problem

There's another layer to this that technical controls can't solve: organizational hierarchy.

Even if an employee notices something off about an avatar message from a senior executive, will they actually challenge it?

In the Arup deepfake incident (where attackers used deepfake video calls to steal $25.6 million from a multinational engineering firm), employees participated in a video conference with what appeared to be their CFO and other executives. The attack worked despite some employees having initial suspicions, in part because challenging senior leadership during a video call requires confidence that many employees simply don't have.

Add a reassuring "CREATED WITH ZOOM AI COMPANION" badge, and that last bit of hesitation evaporates. The badge provides psychological permission to not ask questions. "Of course it's legitimate...see the official marker?"

This dynamic becomes especially problematic in organizational cultures where:

  • Subordinates are expected to defer to authority
  • Questioning leadership is viewed as insubordination
  • Speed and responsiveness are valued over verification procedures
  • Remote work has normalized unusual communication patterns

An attacker doesn't need to bypass your technical security measures. They just need to give your employees permission to ignore their own instincts. A professional-looking badge accomplishes exactly that.

Normalized Behavior Is the Real Threat

The bigger problem isn't the technology...it's the legitimization. When major platforms like Zoom make avatar use normal business practice, they create the perfect environment for social engineering:

Lowered suspicion. Once employees get used to receiving avatar clips from colleagues, the bar for skepticism drops. "The CEO sent an avatar message about updated payment procedures" becomes unremarkable. Attackers don't need sophisticated bypasses...they just need to blend into accepted practice.

Expanded attack surface. Right now, if you receive an unexpected video message from your CFO, it's novel enough to trigger questions. After avatar adoption, that same message fits established patterns. The unusual becomes routine.

Plausible deniability. When an employee acts on fraudulent instructions delivered via avatar, who's responsible? Was it a legitimate avatar? A spoofed avatar? A deepfake with a fake badge? The authentication line just got very blurry, and that ambiguity favors attackers.

And this isn't limited to Zoom. Third-party tools like HeyGen already let users create interactive avatars that can join video calls in real time. Microsoft and Google will almost certainly launch their own versions. The technology is coming from every direction, with varying degrees of security consideration.

What Real Security Would Look Like (and why we don't have it)

Zoom mentions "advanced authentication, watermarking technology, and strict usage policies." HeyGen talks about safeguards. The details remain vague. Here's what actual security for avatar technology would require:

Cryptographic signing. The watermark should be cryptographically signed and verifiable, not just visual pixels that can be copied. Users should be able to verify the chain of custody from avatar creation to message delivery.
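
For illustration, here's what signature-based verification could look like, using an Ed25519 key from the Python cryptography library. Everything here is hypothetical (the manifest fields, identifiers, and key handling are my assumptions, not a published Zoom scheme); the point is that a client verifies a signature over the clip's hash and metadata instead of trusting pixels.

    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Platform signing key. In a real deployment this would live in an HSM,
    # with the public key distributed to clients out of band.
    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    clip_bytes = b"<rendered avatar clip>"  # stand-in for the video file

    # Manifest binding the clip's hash to hypothetical provenance fields.
    manifest = json.dumps(
        {
            "clip_sha256": hashlib.sha256(clip_bytes).hexdigest(),
            "avatar_id": "avatar-0042",            # hypothetical identifier
            "authorized_by": "user@example.com",   # hypothetical account
            "created_at": "2024-11-20T10:00:00Z",
        },
        sort_keys=True,
    ).encode()

    signature = signing_key.sign(manifest)

    # A verifying client checks the signature and the clip hash, not the badge.
    try:
        verify_key.verify(signature, manifest)
        print("clip verified")
    except InvalidSignature:
        print("clip rejected")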

Biometric enrollment verification. Creating an avatar should require multi-factor authentication and live verification that the person enrolling is actually the account holder (not just someone with access to recorded footage).

Transparent provenance. Every avatar message should include verifiable metadata: when it was created, who authorized it, and what verification methods were used. This data should be tamper-evident and independently auditable.
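
One generic way to make that audit trail tamper-evident is a hash chain: each record carries the hash of the record before it, so editing any historical entry breaks every later link. Again, this is an illustration of the technique, not a feature Zoom has described.

    import hashlib
    import json

    def append_record(log: list, record: dict) -> None:
        """Append a provenance record, chained to the previous entry by hash."""
        prev_hash = log[-1]["entry_hash"] if log else "0" * 64
        body = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        log.append({"record": record, "prev_hash": prev_hash, "entry_hash": entry_hash})

    def verify_log(log: list) -> bool:
        """Recompute the chain; a tampered record invalidates everything after it."""
        prev_hash = "0" * 64
        for entry in log:
            body = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
                return False
            prev_hash = entry["entry_hash"]
        return True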

Revocation capability. If an avatar gets compromised or an employee leaves the org, there needs to be a reliable way to invalidate that avatar and prevent its future use.
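
Mechanically, revocation can be as simple as checking a server-side list before any clip is verified or played. The identifiers and store below are hypothetical; a real system might publish this the way certificate revocation lists are published.

    # Hypothetical server-side revocation store; in practice this would be a
    # database or a signed, regularly published revocation list.
    REVOKED_AVATAR_IDS = {"avatar-0042"}  # e.g., departed employee, compromised enrollment

    def is_avatar_valid(manifest: dict) -> bool:
        """Reject any clip whose avatar was revoked, even if its signature checks out."""
        return manifest["avatar_id"] not in REVOKED_AVATAR_IDS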

As far as I can tell, none of these technical measures exist in the announced implementations. And even if they did, there's a fundamental problem...most users won't verify cryptographic signatures or check metadata chains. They'll glance at the visual indicator and make a go/no-go call.

Which means attackers will keep replicating the visual chrome, and users will trust it.

What This Means for Your Org

Three immediate implications:

1. Visual watermarks are not authentication

Treat badges like "CREATED WITH ZOOM AI COMPANION" the same way you'd treat an email signature or a professional-looking letterhead: trivial to fake, zero verification. Train employees that watermarks indicate disclosure, not authenticity.

2. Video content is now low-trust by default

Whether it's a Zoom Clip, a recorded message, or an embedded video in email...assume synthetic media is possible. Establish verification protocols for any request delivered via video that involves financial transactions, credential changes, or sensitive data access.

3. The "I saw it on video" defense is dead

When an employee acts on instructions delivered via avatar message, "but it looked exactly like our VP" isn't sufficient due diligence anymore. You need separate verification channels for high-risk actions, regardless of how convincing the video appears.

Why Security Theatre Makes Everything Worse

This isn't fearmongering about hypothetical threats. Organizations have already lost significant money to deepfake attacks...in a recent survey of 500 IT security leaders, 85% of organizations reported at least one deepfake incident, with average losses of $280,000 per incident.

That was before mainstream platforms legitimized the technology.

Here's what changes when Zoom, Microsoft, and Google all launch avatar features:

Attacks get easier to execute. Instead of explaining why a deepfake video exists, attackers can point to widespread avatar adoption. "Of course I sent an avatar message...everyone does now."

Detection gets harder. Security teams can't flag all avatar usage as suspicious when it's standard business practice. The signal-to-noise ratio just got much worse.

Training becomes contradictory. How do you tell employees "avatars are normal and convenient" while also saying "don't trust avatar messages for sensitive requests"? That's a confusing message, and confused employees make mistakes.

The companies building these tools are solving real problems...meeting fatigue, time zone challenges, communication efficiency. But every feature that makes communication more convenient also makes impersonation more convenient. And when the security measures are security theatre, the convenience comes at the expense of authentication.

What to Do Now

Here's what I'd recommend before avatar adoption becomes widespread:

Five practical steps:

  1. Establish that watermarks are not verification. Update your security training to explicitly address visual indicators like badges and watermarks. Make it clear these are disclosure mechanisms, not authentication mechanisms...and that they can be replicated by attackers.

  2. Create avatar-specific verification protocols. For any sensitive request delivered via avatar message or video clip (not just live calls), require secondary verification through a different channel. Phone callback, in-person confirmation, or multi-factor approval workflows.

  3. Build safe escalation paths. Give employees a way to verify unusual requests from leadership without directly challenging executives during a video call. A confidential verification line, a designated security contact, or a "callback protocol" that doesn't require employees to accuse anyone of fraud in real-time.

  4. Document your risk tolerance. Decide which use cases for avatars are acceptable in your organization and which aren't. "Avatars for training videos" might be fine. "Avatars for payment authorization" probably isn't. Make these boundaries explicit.

  5. Monitor for social engineering patterns. Watch for requests delivered via video that create urgency, bypass normal processes, or involve financial transactions. These are the scenarios where avatar technology becomes an attack vector.

The question isn't whether avatar technology will become normal. It will. Zoom's launching in early 2025. Microsoft and Google will follow.

The question is whether your verification protocols will adapt before the attacks start...and whether you'll rely on security theatre or actual security to protect your organization.


Want to see how organizations are handling deepfake threats in real-world scenarios? Our 2025 Deepfake Threat Report covers attack patterns, financial impact, and defensive strategies from a survey of 500 enterprise IT security leaders. Download the report.