DeepSeek AI: What IT Security Leaders Need to Know

By now, you’ve probably heard about DeepSeek-R1, the open-source AI tool everyone is talking about. It’s being praised as a language-model disrupter, capable of matching top-tier AI tools like OpenAI’s GPT-4 at a fraction of the cost. Naturally, that’s led to some excitement about how organizations might use it to boost productivity or innovate.

But let’s take a beat. As an IT security leader, you’re likely asking the key question: What’s the catch?

While DeepSeek is impressive, it comes with serious privacy, compliance, and security risks that can’t be ignored. If leadership or employees in your organization are pushing to "try DeepSeek," here’s what you need to know before diving in.

What Makes DeepSeek Stand Out (and Raises Security Red Flags)?

Released in January 2025 (so yesterday), DeepSeek-R1 is a Chinese-developed AI model designed for reasoning, coding, and problem-solving tasks. As an open-source tool, it can be accessed via the web or deployed locally, making it accessible to organizations of all sizes. However, DeepSeek comes with trade-offs that demand scrutiny by security professionals.

Let’s get into it.

Privacy Policy Concerns

A closer look at DeepSeek’s privacy policy raises serious red flags for anyone responsible for data security and privacy. Some of the hits:

  • Keystroke Tracking: DeepSeek collects “keystroke patterns or rhythms,” a type of behavioral biometric data that could expose employee credentials or other sensitive information. (This is different from, but just as scary as, keylogging. Behavioral biometrics track how someone types, including speed, rhythm, and pressure, to uniquely identify them, even across devices or accounts, without their knowledge.)
  • Data Storage in China: Any data shared with DeepSeek’s cloud platform is stored on servers in China, making it subject to local laws that could allow government access.
  • Indefinite Data Retention: Files and queries submitted to DeepSeek may be retained indefinitely and used to train their models, increasing exposure risks.


Local Deployment: Does It Solve the Problem?

DeepSeek’s local deployment capabilities allow organizations to use the model offline, providing better control over data. However, even with local use, there are lingering risks:

  • Hidden Telemetry
    Without a thorough code audit, it’s impossible to guarantee that telemetry (data sent back to the developer) is completely disabled.
  • Misconfigurations
    If local deployments are not configured properly, sensitive data could still be exposed.
  • Open-Source Challenges
    While open-source software enables flexibility, it also requires expertise to secure and monitor effectively.
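The misconfiguration risk above is concrete and checkable. One common mistake is binding a local inference server to all network interfaces instead of loopback only, which silently exposes the model (and anything pasted into it) to the rest of the network. Below is a minimal sketch of that check; the `host` config key and its format are hypothetical, so adapt it to whatever serving stack you actually deploy.

```python
def check_bind_config(config):
    """Flag a locally deployed inference server whose bind address
    exposes it beyond the loopback interface.

    `config` is assumed to be a dict with an optional "host" key; the
    exact key name depends on your serving stack and is illustrative.
    """
    host = config.get("host", "127.0.0.1")
    # "0.0.0.0" (all IPv4 interfaces), "::" (all IPv6), or an empty
    # string all mean the server is reachable from the network.
    if host in ("0.0.0.0", "::", ""):
        return f"WARNING: server bound to {host!r}, reachable from the network"
    return f"OK: bound to {host!r} (loopback-only if this is a loopback address)"
```

A check like this belongs in deployment tooling or CI, not in a runbook someone has to remember to follow.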

Open-Source Enables Innovation, But Security Comes First

We’re strong believers in the power of open-source software. It’s the backbone of modern innovation, from Linux to Kubernetes to pfSense, and tools like DeepSeek demonstrate just how far it can push the boundaries of AI accessibility.

But in enterprise environments, openness can’t come at the expense of security. Tools that handle sensitive data must be carefully governed. Misconfigurations or hidden vulnerabilities in open-source models can lead to breaches that far outweigh any initial benefits.

At IRONSCALES, our approach to AI reflects these realities. We combine machine learning with Human-in-the-Loop (HITL) oversight, ensuring that our models are:

  • Trustworthy: Designed to adapt safely to real-world threats like phishing and social engineering.
  • Accurate: Prioritizing precision in environments where mistakes can have high stakes.

While tools like DeepSeek focus on efficiency, we put responsibility first, because innovation without governance isn’t an option in mission-critical settings.

If You’re Being Pushed to ‘Try DeepSeek,’ Here’s What to Do

If your leadership or employees are eager to "try DeepSeek," it’s important to slow things down and evaluate the risks. Below is some guidance you can use to protect your organization while deciding whether tools like DeepSeek are a good fit.

Step 1: Block and Contain Access

  • Website Access
    Use your Secure Web Gateway (SWG) or firewall to block access to DeepSeek’s website, app, and API endpoints.
  • Install Restrictions
    Deploy endpoint protection tools to block unauthorized downloads or installations of DeepSeek’s local version.
  • Network Monitoring
    Analyze outbound traffic for attempts to access DeepSeek’s cloud servers or APIs.
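To make the network-monitoring step concrete, here is a minimal sketch of a log triage pass that flags proxy or DNS log entries pointing at DeepSeek infrastructure. The domain patterns and the `timestamp host` log format are assumptions for illustration; verify DeepSeek's actual domains and API endpoints, and match the parsing to your SWG or resolver's real log schema.

```python
import re

# Hypothetical blocklist. Confirm DeepSeek's actual domains and API
# endpoints before using this in production; these are illustrative.
BLOCKED_PATTERNS = [
    re.compile(r"(^|\.)deepseek\.com$", re.IGNORECASE),
    re.compile(r"(^|\.)deepseek\.ai$", re.IGNORECASE),
]

def flag_blocked_hosts(log_lines):
    """Return (line_number, host) pairs whose destination host matches
    a blocked pattern.

    Assumes a simple whitespace-separated 'timestamp host' log format;
    adapt the parsing to your gateway's actual schema.
    """
    hits = []
    for i, line in enumerate(log_lines, start=1):
        parts = line.split()
        if len(parts) < 2:
            continue
        host = parts[1]
        if any(p.search(host) for p in BLOCKED_PATTERNS):
            hits.append((i, host))
    return hits
```

Anchoring the patterns with `(^|\.)` and `$` matters: it matches `api.deepseek.com` but not look-alike domains such as `notdeepseek.com`, which a naive substring check would flag.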

Step 2: Perform a Thorough Risk Assessment

  • Review DeepSeek’s privacy policy and evaluate its compliance with regulations like GDPR, CCPA, or HIPAA.
  • If local deployment is being considered, conduct a code audit to identify any telemetry or vulnerabilities. Use monitoring tools to verify offline operation.

For IT teams conducting this analysis, here are the key resources to access and inspect DeepSeek-R1:

  1. GitHub Repository
    DeepSeek’s main source code is available on GitHub: https://github.com/deepseek-ai/DeepSeek-R1
  2. Hugging Face Models
    You can find the model weights and configurations here: DeepSeek on Hugging Face
  3. Technical Paper
    DeepSeek’s architecture and training details are explained in their technical paper: DeepSeek_R1.pdf

Step 3: Establish Temporary Policies

  • Restrict the use of all unapproved AI tools (including DeepSeek) until a full review is completed.
  • Share policies with employees, outlining acceptable use and approval processes for external AI tools.

Step 4: Communicate Proactively

Email to Employees

Subject: Temporary Restriction on DeepSeek AI While Under IT Security Review

Dear Team,

We know there’s a lot of buzz around DeepSeek-R1, and some of you might be curious about using it for work. However, to protect our data and systems, we’re temporarily blocking access to DeepSeek while the IT Security team conducts a full review of its privacy, security, and compliance implications.

Here’s what you need to know:

1. Temporary Restrictions: Access to DeepSeek’s website, app, and cloud services is currently blocked on company networks.

2. No Local Installations: Please don’t install or use any version of DeepSeek on company devices until we give the green light.

If you have any questions, contact [IT Support]. Thanks for your patience as we do our due diligence.

 

Memo to Leadership

Subject: Temporary Restrictions on DeepSeek AI: Full Review in Progress

Dear Leadership Team,

DeepSeek-R1, the open-source AI model released earlier this month, is generating significant interest due to its capabilities and accessibility. While its release presents exciting opportunities for innovation, it also introduces potential security and compliance risks that must be carefully evaluated before any use within our organization.

Preliminary Concerns:

- Behavioral Tracking: DeepSeek collects sensitive telemetry (e.g., keystroke patterns).

- Data Storage in China: User data is stored on servers in China, raising compliance concerns.

- Telemetry in Local Deployments: Misconfigurations could lead to unintended data exposure, even in offline mode.

We have temporarily restricted access to DeepSeek while we conduct a detailed analysis of its privacy, security, and compliance implications.

Thank you for your understanding.

 

Balancing Innovation with Responsibility

Open-source tools like DeepSeek offer immense potential, but they also highlight the need for security-first governance in enterprise environments. By taking a cautious, informed approach, you can enable innovation while protecting your organization’s most critical assets.
