On the frontlines of protecting AI

June 21, 2024
Defending AI systems and processes, and assessing the effects of attacks on them, differs from defending typical cyber targets because AI/ML apps and models are dynamic and always changing as they learn.

Earlier this month, Protect AI released its June AI/ML vulnerabilities report, which details 32 vulnerabilities, including critical flaws found in the NVIDIA Triton Inference Server and the Intel Neural Compressor.

NVIDIA Triton Inference Server is an open-source software tool that helps standardize the deployment and execution of AI models. It's part of the NVIDIA AI platform and is available with NVIDIA AI Enterprise. 
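For context, here is a minimal sketch of how a client might query a model already deployed on a Triton server using the tritonclient Python package. The model name, tensor names, and shapes are hypothetical and depend on how the server's model repository is configured.

```python
# Hypothetical sketch: sending an inference request to a running Triton server.
# Assumes a model named "resnet50" with input "INPUT0" and output "OUTPUT0"
# is already loaded in the server's model repository.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single request carrying a random FP32 image-shaped tensor.
infer_input = httpclient.InferInput("INPUT0", [1, 3, 224, 224], "FP32")
infer_input.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

response = client.infer(model_name="resnet50", inputs=[infer_input])
print(response.as_numpy("OUTPUT0").shape)
```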

Intel Neural Compressor is a tool that optimizes deep learning models to achieve better performance and efficiency without sacrificing accuracy. 
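To illustrate the kind of optimization such tools perform (this is not the Neural Compressor API itself), here is a generic post-training quantization sketch using PyTorch's built-in dynamic quantization, which converts linear-layer weights to 8-bit integers to shrink and speed up a model.

```python
# Generic illustration of post-training quantization, not Intel Neural Compressor's API:
# weights of Linear layers are converted to int8, trading a little precision for
# a smaller, faster model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, reduced weight precision
```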

Protect AI noted these supply chain vulnerabilities put AI/ML apps and models at risk, and allow attackers to inject arbitrary log entries, potentially hiding malicious activities or misleading investigations. They were discovered by the Protect AI huntr bug bounty community of security researchers.
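As a rough illustration of the log-injection class of flaw described above (not the specific Triton or Neural Compressor code paths), the pattern generally looks like this: an attacker-controlled string containing newline characters is written into a log unsanitized, allowing fake entries to be forged.

```python
# Illustrative only: a generic log-injection pattern, not the actual vulnerable code.
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)

def record_request(model_name: str) -> None:
    # Vulnerable pattern: untrusted input is logged without sanitization.
    logging.info("inference requested for model: %s", model_name)

# A crafted value with embedded newlines forges what looks like a separate,
# legitimate log entry, hiding real activity or misleading an investigation.
record_request("resnet50\n2024-06-21 12:00:00 INFO maintenance window started")

def sanitize(value: str) -> str:
    # One common mitigation: escape control characters before logging.
    return value.replace("\r", "\\r").replace("\n", "\\n")
```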

It may not be the first time AI systems have been rendered vulnerable, and it certainly won’t be the last, says Marcello Salvati, Senior Engineer and Researcher at Protect AI. The primary motives for AI/ML attacks are PII leakage, data manipulation, model poisoning, infrastructure attacks, and reputational damage. These can result in fraud, data theft, disruption and more.

NVIDIA's AI tools are generally considered the “gold standard” of AI tooling, mostly because they are highly optimized for the company's currently industry-leading AI chips, Salvati says. Intel tools are also popular, “but I would say probably not as much as NVIDIA. I would say both companies' tools are extremely relied upon in the AI industry.”

From the infrastructure and supply chain perspective of AI (where most of the attack surface may be), “there really is no technical difference between the methods used in traditional cyberattacks and attacks on AI systems,” Salvati says. “The only thing that’s changed is the attackers are now learning more about how to attack these types of systems, which consequently increases the amount of breaches.”

One example he shares is the ShadowRay campaign, also the first publicly known attack specifically targeting AI systems. As attackers gain awareness of the value of compromising AI systems, and a more technical understanding of how to do so, the number of breaches tied to AI systems will increase, Salvati says.

Defending AI systems and processes, and assessing the effects of attacks on them, differs from defending typical cyber targets because AI/ML apps and models are dynamic and always changing as they learn, Salvati notes.

“They are also developed using a large open-source ecosystem of libraries, packages, frameworks, foundational models and third-party data sets. Traditional security tools, which lack visibility into the complex and dynamic nature of ML systems and data workflows, are not sufficient to address the new vulnerabilities and threats targeting AI/ML applications and systems.”

Salvati believes most organizations have neither the skills nor the resources to detect threats and vulnerabilities in their AI/ML supply chain or ascertain and manage their AI/ML inventories.

It will certainly be interesting to watch how the security industry analyzes and responds to attacks on AI and ML systems as the world grows more dependent on their capabilities. The full June vulnerability report is available from Protect AI.

About the Author

John Dobberstein | Managing Editor/SecurityInfoWatch.com

John Dobberstein is managing editor of SecurityInfoWatch.com and oversees all content creation for the website. Dobberstein continues a 34-year decorated journalism career that has included stops at a variety of newspapers and B2B magazines. He most recently served as senior editor for the Endeavor Business Media magazine Utility Products.