Lessons from Skynet, or why your AI identities need governance and least privilege
When ChatGPT hit the public in late 2022, it triggered a spate of references to Skynet. Today's AI is still a far cry from the dystopian neural network of the Terminator films, but it does pose unique security challenges that need to be grappled with. Skynet had complete control over all U.S. ballistic missile systems without adequate security and permission controls, which let it start a nuclear war in an effort to kill off mankind.
As AI entities become more autonomous and gain access to more sensitive data and systems, CISOs stand to face their own cybersecurity crisis: today's security technology and practices aren't equipped to deal with AI. It won't be Armageddon, but no company wants AI usage to leak its most prized assets.
Not human, but more than machine
Chatbots and other AI entities are hard to classify in infosecurity terms. They aren't human identities, which are largely responsible for guarding their own keys to the enterprise, aka passwords. Nor are they machine identities: the software, devices, virtual machines, APIs, and bots that operate within a network. AI entities are inherently different from both identity types, yet they need comparable security controls, applied in a way that takes their unique attributes into account.
AI is a fusion of human-guided learning and machine autonomy. It can take vast amounts of unstructured data and structure it for human consumption. AI models are designed to hand out information without question; that is their sole purpose. AI sits between humans, who know how to guard their passwords (at least in theory), and machines, which store passwords that can be compromised and stolen. It has more autonomy than a machine but less than a human, and it needs access to other systems to do its job while lacking the judgment to know when to apply limits.
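To make that in-between status concrete, here is a minimal sketch of how an identity inventory might model AI as its own class rather than forcing it into the human or machine bucket. The type names, attributes, and scopes are illustrative assumptions, not an established schema:

```python
from dataclasses import dataclass
from enum import Enum

class IdentityType(Enum):
    HUMAN = "human"      # guards its own credentials (at least in theory)
    MACHINE = "machine"  # stores credentials that can be compromised and stolen
    AI = "ai"            # autonomous, but lacks judgment about what not to share

@dataclass(frozen=True)
class Identity:
    name: str
    identity_type: IdentityType
    accesses_sensitive_data: bool
    # AI identities get explicitly enumerated scopes because, unlike a
    # human, the model can't be trusted to know when to withhold information.
    allowed_scopes: frozenset = frozenset()

# An AI assistant registered as its own identity class:
chatbot = Identity(
    name="support-chatbot",
    identity_type=IdentityType.AI,
    accesses_sensitive_data=True,
    allowed_scopes=frozenset({"kb:read"}),  # least privilege: read-only knowledge base
)
print(chatbot.identity_type, sorted(chatbot.allowed_scopes))
```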
Investment outpaces security
Enterprise spending on AI is taking off, with increasing investments in AI servers and apps as well as rented cloud infrastructure for training large language models (LLMs). One recent survey found that enterprises expect to spend an average of $11.8 million on AI this year, and Accenture pegged its three-year spend on data and AI practices at $3 billion last year.
The rush to invest in AI is eclipsing security efforts. The average security practitioner is already stretched thin by existing duties and isn't investing time in securing AI workloads. Nor do today's security solutions, such as access controls and least-privilege rules, port easily to AI systems. Even with machine identities, companies don't always understand the risks they pose or follow security best practices.
In fact, machine identities are often overlooked. CyberArk's 2024 Identity Security Threat Landscape Report found that 68% of respondents said up to half of their machine identities access sensitive data, yet only 38% of organizations include machine identities with access to sensitive data in their definition of privileged users.
Data leaks and cloud compromises
While AI's security risks aren't unique, their scope and scale could be. Continuously loaded with fresh training data from across an organization, LLMs become high-value targets for attackers the moment businesses build them. Because they can't be trained on dummy test data and still be useful, that data is up to date and potentially reveals intellectual property, financial secrets, and other highly sensitive information. AI systems are also built to trust, which puts them at significant risk of being conned into handing out information they shouldn't.
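One practical mitigation is to scrub obviously sensitive values before documents ever reach the training corpus. Below is a minimal sketch of that idea; the regex patterns and labels are illustrative assumptions, and a production pipeline would rely on a dedicated data-classification or DLP service rather than a handful of regexes:

```python
import re

# Illustrative patterns only, not a complete catalog of sensitive data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9_]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(document: str) -> str:
    """Replace anything matching a sensitive pattern before it reaches
    the training corpus."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        document = pattern.sub(f"[REDACTED:{label}]", document)
    return document

sample = "Ask jane.doe@example.com, SSN 123-45-6789, key sk_live_abcdef1234567890"
print(redact(sample))
# Ask [REDACTED:email], SSN [REDACTED:ssn], key [REDACTED:api_key]
```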
Cloud attacks on AI systems enable lateral movement and jailbreaking that can be used to trick those systems into feeding false information to the public. Identity and account compromises in the cloud are common, and a string of recent attacks stemming from stolen credentials has caused untold damage to some of the biggest brands in the tech, banking, and consumer sectors.
AI could also be used in attacks. For instance, it could let attackers evaluate every permission assigned to a given role and quickly map the easiest path through an organization.
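Defenders can run the same analysis first. The sketch below, using hypothetical role and permission names (real data would come from an IAM provider's policy and access-log APIs), compares what each role is granted against what it actually uses and flags candidates for revocation before an attacker's AI can enumerate them:

```python
# Hypothetical role-to-permission data for illustration only.
granted = {
    "llm-service": {"s3:GetObject", "s3:PutObject", "db:Select", "db:Drop"},
    "report-bot": {"db:Select"},
}
observed_usage = {
    "llm-service": {"s3:GetObject", "db:Select"},
    "report-bot": {"db:Select"},
}

# Every grant a role holds but never uses is lateral-movement surface
# an attacker could map, and a candidate for revocation.
for role, perms in granted.items():
    unused = perms - observed_usage.get(role, set())
    if unused:
        print(f"{role}: candidate revocations -> {sorted(unused)}")
# llm-service: candidate revocations -> ['db:Drop', 's3:PutObject']
```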
So where does that leave us? The use of AI and LLMs in organizations is so new that it will take time for security best practices to be established. In the meantime, CISOs can't sit back and wait; they need to start devising strategies for protecting AI identities before they are forced to, either by a cyberattack or by regulation. AI is covered by compliance standards now, regardless of where the data is stored. A GDPR complaint has already been filed in Europe against an AI company, OpenAI, alleging that ChatGPT responses provided false personal data about consumers.
AI security culture
While there is no silver bullet security solution for AI, there are things organizations can do to address some of these issues. Here are some steps that will help CISOs improve their AI identity security posture as the market matures.
- Look for overlap: Identify areas where existing security practices and policies can provide value for AI. Leverage existing controls such as access management and least privilege where possible (see the sketch after this list).
- Secure the environment: Understand and protect the environment where the AI will live. You don't necessarily need to buy an AI security platform; you need to secure the environment where the AI activity is happening.
- Create an AI security culture: Foster an AI security mindset. Bring security representatives into AI think-tank and skunkworks efforts. Enlist security team members who can apply their resources and skills to risk reduction. This cultural shift means thinking about how data is processed and how the LLM is trained.
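As a concrete example of reusing an existing control, here is a minimal deny-by-default authorization check for an AI identity. The scope names are illustrative assumptions; the point is that every action the model attempts is checked against an explicit allow-list before it executes:

```python
# Deny by default: the AI identity may only act within scopes that have
# been explicitly granted to it. Scope names are illustrative.
ALLOWED_SCOPES = {"kb:read", "ticket:create"}

def authorize(requested_scope: str) -> bool:
    """Return True only for explicitly granted scopes."""
    return requested_scope in ALLOWED_SCOPES

for action in ("kb:read", "hr:read_salaries"):
    print(action, "->", "allowed" if authorize(action) else "denied")
# kb:read -> allowed
# hr:read_salaries -> denied
```

Deny-by-default matters here because, as noted above, AI lacks the judgment to apply its own limits; the boundary has to live outside the model.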
The AI at the heart of Skynet is vastly different from what's helping us write, build code, and use data to improve our business operations today, but there is an important security lesson at the heart of that story that applies to today's generative AI and LLMs. We can't let AI entities slip through the identity cracks just because they are neither human nor machine and play by different rules. Start your AI security planning today with the resources you have, so AI identities don't become your security blind spot.