What DeepSeek’s deep security flaws say about being more discerning users of AI

March 7, 2025
AI innovation doesn’t need to slow down; we need to demand better security from the start.

If you’ve been following the AI space, you’re familiar with DeepSeek-R1, a new open-source generative AI model from China. It’s being hailed as a major breakthrough, with performance comparable to OpenAI’s latest models. It has gained rapid adoption because it’s open-source, widely accessible, and remarkably efficient.

Yes, it is indeed a groundbreaking accomplishment. Working with roughly 6% of the budget, one-tenth the computing power, and GPUs a full generation behind those used by their Western competitors, DeepSeek created a generative AI solution on par with the best from companies like OpenAI, Meta, and Microsoft.

But DeepSeek hasn’t just pushed boundaries in AI performance; it has also raised serious security concerns. From privacy risks to a lack of established safety guardrails, it highlights how AI adoption without proper security can backfire in ways we can’t ignore. Regarding DeepSeek, in one of my recent blogs, I said, “Their AI model isn’t just vulnerable, it’s practically designed for abuse.”

DeepSeek’s Privacy Risks Are Built In

A closer look at DeepSeek’s privacy policy immediately raises red flags. The company collects keystroke patterns, a form of behavioral biometrics that, while not the same as keylogging, could still be used to identify users uniquely. It retains data indefinitely and stores it on servers in China, subjecting user information to local regulations that could allow government access. And no, simply installing it locally doesn’t eliminate these risks unless you fully audit the working software and confirm there’s no hidden telemetry.

This isn’t just a compliance issue; it’s a direct risk to enterprise security. AI tools that log user interactions, store them indefinitely, and operate in foreign jurisdictions introduce significant data exposure risks that organizations must assess before adopting.

A 100% Jailbreak Success Rate? That’s a Problem

Most AI models can be jailbroken with enough effort. But with DeepSeek, it requires no effort at all. None.

Researchers at Cisco and the University of Pennsylvania tested DeepSeek-R1 using 50 standard jailbreak techniques. Every single one worked, which is a 100% failure rate against attacks that most AI companies have spent years defending against.

This means bad actors can use DeepSeek to generate misinformation, automate phishing attacks and cybercrime, and create exploit code with no restrictions. Unlike OpenAI, Anthropic, and other leading AI companies investing in real-time red-teaming and safety layers, DeepSeek lacks the safeguards to prevent abuse.

A closer look at DeepSeek’s privacy policy immediately raises red flags. The company collects keystroke patterns, a form of behavioral biometrics that, while not the same as keylogging, could still be used to identify users uniquely.

It’s not that DeepSeek is uniquely vulnerable; all AI models face these challenges. The difference is that DeepSeek doesn’t even try to stop it. My guess is that they’ll address this eventually.

DeepSeek’s Own Infrastructure Wasn’t Secure

As if the jailbreak issue wasn’t bad enough, DeepSeek also failed at basic cybersecurity hygiene.

Researchers at Wiz found a completely exposed DeepSeek database sitting on the internet with no authentication, no firewall, nothing. The leaked data included:

●     User chat logs (potentially containing sensitive information)

●     API authentication tokens (which could be used for unauthorized access)

●     Internal system logs (a roadmap for attackers)

This wasn’t a “sophisticated hacking” but an entirely preventable oversight. Worse, DeepSeek had no straightforward security disclosure process, forcing researchers to spam their employees on LinkedIn and guess email addresses to get them to secure the database.

If an AI vendor can’t even protect its infrastructure, should organizations trust it with theirs?

AI Adoption Shouldn’t Mean Ignoring Security

DeepSeek’s security failures highlight a more significant industry trend: AI companies prioritize speed over security. AI models are being pushed to market at alarming speed, often before proper safeguards are in place.

The pattern is predictable:

●     AI models launched before they’re thoroughly tested

●     Security researchers expose vulnerabilities

●     Vendors scramble to patch issues

●     The cycle repeats, at higher stakes each time

The problem isn’t just DeepSeek; it’s the wider AI industry’s approach to security as an afterthought.

A Practical Approach to Adopting New AI

AI is evolving fast, and while that brings huge potential, security can’t be an afterthought. Whether your organization is actively exploring AI tools or keeping an eye on industry trends, the key is evaluating them with security in mind (not just performance).

●     Vet AI models carefully. Just because a tool is powerful doesn’t mean it’s secure. Before integrating it into your environment, ask questions about data retention, security controls, and governance policies.

●     Test for vulnerabilities before deployment. Don’t assume an AI tool is safe out of the box. Red team it, sandbox it, and analyze its responses (and telemetry) for potential risks.

●     Be curious but cautious. AI adoption is exciting, but rushing into it without security can create long-term risks that are hard to undo.

This guide provides practical steps for organizations already considering DeepSeek to mitigate risk and control rogue usage. From blocking access to enforcing clear security policies, there are ways to explore new AI tools without exposing your organization to unnecessary risk.

While the constant media coverage and rapid evolution might make new AI offerings feel           ”ready,” it’s often a risky and untested frontier in many ways. Unfortunately, that means security leaders must cut through the hype, ask the hard questions, and assess risk before adoption, not after.

We don’t need to slow down AI innovation; we must demand better security from the start. The organizations that get this right will not just protect themselves; they’ll be shaping the future of AI in a way that’s both responsible and resilient.

About the Author

Audian Paxson | Principal Technical Strategist at IRONSCALES

Auduan Paxson is a Principal Technical Strategist at IRONSCALES. He is a recognized enterprise IT infrastructure and cybersecurity authority with over 20 years of experience driving cloud security and advanced threat protection innovation. He holds three USPTO patents focused on groundbreaking advancements in enterprise security and is known for his expertise in leveraging AI to counter emerging threats. Audian’s deep understanding of the rapidly evolving threat landscape and his ability to bridge technical insights with practical applications make him a sought-after voice in the cybersecurity industry.