Securiti AI launches context-aware LLM firewalls to secure genAI applications
SAN JOSE, Calif., April 30, 2024 – Securiti today announced a new category of LLM Firewalls, the Securiti LLM Firewall, purpose built to protect GenAI systems & applications and the associated enterprise data and AI models. Radically different from the traditional firewalls, these distributed LLM firewalls are designed to understand a variety of languages, user prompts, and multimedia content, and provide protections against adversarial attacks and potential exposure of sensitive data.
Modern applications will be conversational, based on multilingual user prompts and AI responses, combined with multimedia content. All such multilingual conversations and multimedia content need to be inspected in-line to detect external attacks, malicious use, and user mistakes. Also, such LLM firewalls need to be smarter to understand the context of the data associated with such GenAI applications to limit the scope of engagement.
Traditional network and application firewalls are not designed for it, leaving the door open for cyberattacks on GenAI applications, such as highlighted by OWASP Top 10 for LLMs. Securiti LLM Firewalls are a new category of distributed firewalls, designed to protect GenAI systems & applications.
Securiti LLM Firewalls are part of the overall AI Security and Governance solution, announced earlier in the year; see AI Security & Governance Tutorial.
“GenAI is rapidly ushering in a new wave of innovation, but this technology also poses serious privacy and security risks that need to be addressed,” said Ryan O’Leary, Research Director, Privacy and Legal Technology at IDC. “Securiti’s new LLM firewall represents a new class of protections that are needed to safeguard GenAI and ensure organizations are not compromising privacy and security along the way.”
Broad adoption of GenAI into business processes is contingent on enterprises finding solutions to safely adopt the technology, mitigating privacy and security threats that can result in loss of trust, legal repercussions and reputational damage. New types of threats such as prompt injections, data poisoning, and data exfiltration require a new form of protection.
Thwarting New Threats & Attack Vectors:
The conversational nature of GenAI has opened the door for brand new types of threats and attack vectors and Securiti LLM Firewalls are designed to protect against it. Internal or public facing Prompts Interfaces are a new pathway to enterprise data. Securiti LLM Firewalls detect and protect against:
- Prompt injection attacks (OWASP LLM01)
- Insecure output handling (OWASP LLM02)
- Sensitive data disclosure (OWASP LLM06)
- Training data poisoning (OWASP LLM03)
- Jailbreak attacks
- Offensive content and abusive language
- Authentication phishing attacks and much more.
Securiti LLM Firewalls can detect and stop such attacks in-line and in real time. Combined with other capabilities within the Securiti Data+AI Command Center, they cover most aspects of the OWASP Top 10 for LLMs.
"AI will be transformative for businesses like ours, but before it can be fully embraced we need proper safeguards and controls to mitigate risks. Securiti’s new LLM firewalls along with their unique expertise in managing sensitive data at scale are critical to enabling organizations like ours to harness the power of AI,” shared Craig VanHuss, Director of IT, Infrastructure, Data, & Architecture at KVAT Food Stores Inc.
Power of Proximity & Context-Awareness:
Protecting GenAI systems & applications requires more than a traditional perimeter-based firewall approach. The inspections and controls need to be embedded at various stages of the overall system. For instance, to protect retrievals from Vector DBs within a GenAI system, a retrieval firewall needs to be inserted in-line next to Vector DBs to monitor and control all retrieval attempts from it. Similarly, all internal user prompts, even for internal GenAI applications, need to be monitored and secured.
“Enterprise organizations we work with are eager to take advantage of GenAI to create business value,” said Daniel Kendzior, Global Data & AI Security Practice Lead at Accenture. “Securiti’s new LLM Firewall delivers critical infrastructure to help these organizations adopt GenAI safely, mitigating privacy and security threats while accelerating innovation.”
In addition, Securiti Data Command Graph provides the necessary context for enterprise controls and data related to the GenAI systems & applications that need protection. This context enables the Securiti LLM Firewalls powerfully tuned for the GenAI use cases. Furthermore, the enterprise controls and policies used across the enterprise within Securiti Data+AI Command Center are readily made available, including:
- Enterprise definition and classification of sensitive information
- User Data Entitlements
- Internal data policies and data controls, such as masking rules
- Applicable regulations
- Compliance requirements
“Our mission is to enable organizations to unleash the power of their data safely with GenAI,” said Rehan Jalil, CEO of Securiti AI. “This new category of LLM firewalls for the GenAI apps are playing a critical role in providing the security for GenAI’s mainstream use cases in the enterprise”
Enabling AI Compliance:
Securiti LLM Firewalls are a fundamental ingredient for establishing compliance with major AI regulations, such as the EU AI Act and the NIST AI Risk Management Framework. They also provide key components of a com0prehensive AI Trust, Risk, and Security Management (TRiSM) program. (See TRiSM Tutorial).
Securiti LLM Firewalls, combined with other capabilities within the Data+AI Command Center, provide automations for compliance with regulations like the EU AI Act and the NIST AI RMF.
- Securiti LLM Firewalls will be showcased at RSA, at booth #3305.
- Securiti LLM Firewall (website)
- The CISO Guide for Securing GenAI Applications (white paper)
- AI Security & Governance Certification
- Request Demos
- Request Trial Access
- Request 1-1 meeting at RSA