SAN FRANCISCO -- Cloudflare, Inc., the leading connectivity cloud company, today announced the development of Firewall for AI, a new layer of protection that will identify abuse and attacks before they reach and tamper with Large Language Models (LLMs), a type of AI application that interprets human language and other types of complex data.
Backed by the power of Cloudflare’s global network, one of the largest in the world, Firewall for AI will position Cloudflare as one of the few security providers prepared to combat the next wave of attacks in the AI revolution – those targeting the functionality, critical data, and trade secrets held within LLMs.
A recent study revealed that only one in four C-suite executives is confident that their organization is well prepared to address AI risks. When it comes to protecting LLMs, it can be extremely challenging to bake in adequate security from the start: it is near impossible to limit user interactions, and these models are non-deterministic by design – that is, they may produce a variety of outputs even when given the same input. As a result, LLMs have become an undefended path for threat actors – leaving organizations vulnerable to model tampering, attacks, and abuse.
“When new types of applications emerge, new types of threats follow quickly. That’s no different for AI-powered applications,” said Matthew Prince, Co-Founder & CEO at Cloudflare. “With Cloudflare’s Firewall for AI, we are helping build security into the AI landscape from the start. We will provide one of the first-ever shields for AI models that will allow businesses to take advantage of the opportunity that the technology unlocks, while ensuring they are protected.”
With Cloudflare’s Firewall for AI, security teams will be able to protect their LLM applications from the potential vulnerabilities that can be weaponized against AI models. Cloudflare will help enable customers to:
- Rapidly detect new threats: Firewall for AI can be deployed in front of any LLM running on Cloudflare’s Workers AI. By scanning and evaluating the prompts a user submits, it will better identify attempts to exploit a model and extract data.
- Automatically block threats – with no human intervention needed: Built on top of Cloudflare's global network, Firewall for AI will be deployed close to the end user, providing unprecedented ability to protect models from abuse almost immediately.
- Implement security by default, for free: Any customer running an LLM on Cloudflare’s Workers AI can be safeguarded by Firewall for AI for free, helping to guard against growing threats like prompt injection and data leakage.
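Conceptually, the prompt-scanning step described in the first bullet can be sketched as a pre-filter that inspects a user's prompt before it ever reaches the model. The sketch below is purely illustrative – the function names and the pattern list are assumptions for explanation, not Cloudflare's actual Firewall for AI implementation, which applies far more sophisticated analysis:

```typescript
// Illustrative sketch only — NOT Cloudflare's actual Firewall for AI logic.
// Shows the general shape of scanning a prompt for known abuse patterns
// before forwarding it to an LLM.

type Verdict = { allowed: boolean; reason?: string };

// Hypothetical signatures of common prompt-injection attempts.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /reveal (your )?(system prompt|hidden instructions)/i,
  /disregard (your|the) (rules|guardrails|safety)/i,
];

// Evaluate a prompt; block it if any known pattern matches.
function scanPrompt(prompt: string): Verdict {
  for (const pattern of INJECTION_PATTERNS) {
    if (pattern.test(prompt)) {
      return { allowed: false, reason: `matched pattern: ${pattern.source}` };
    }
  }
  return { allowed: true };
}
```

A gateway deployed in front of the model would call `scanPrompt` on each request and return an error instead of invoking the LLM when `allowed` is false; real-world systems combine such heuristics with model-based classification rather than relying on static patterns alone.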
According to Gartner, "You cannot secure a GenAI application in isolation. Always start with a solid foundation of cloud security, data security and application security, before planning and deploying GenAI-specific security controls." Cloudflare's Firewall for AI will add a new layer to its existing comprehensive security platform, ultimately addressing the threats posed by this emerging technology.