AI Agents Are the New Workload: What That Means for Security

Autonomous agents are in production, augmenting human teams, optimizing processes, and unlocking new efficiencies. But it’s a shift that has come with an alarming new risk.
Oct. 27, 2025
8 min read

Key Highlights

  • Autonomous agents now make real decisions and take actions across systems, yet most organizations still don’t manage them as distinct identities.

  • AI agents’ dynamic, unpredictable behavior demands updated discovery, threat modeling, and testing beyond traditional frameworks.

  • Extending PKI and using protocols like MCP ensures agents are authenticated, authorized, and operating within trusted boundaries.

It wasn’t all that long ago that AI was driving “what if” discussions about the future in boardrooms as the next big thing in tech. Reality has quickly caught up to the hype, and now AI is embedded into workflows and taking on operational responsibilities for companies that were only imagining use cases a year ago.

As generative AI becomes a more meaningful and concrete part of enterprise strategies, teams are increasingly building AI-powered tools to automate complex processes. This transformation has introduced a new class of non-human actors into corporate networks: AI agents. Unlike traditional applications that simply process data, these agents make decisions, take actions, and interface with critical systems autonomously.

These AI agents aren’t like the chat-based assistants that have become synonymous with GenAI. Those applications respond to input, but these AI agents are autonomous: they perceive, plan, and act on goals with minimal human oversight. They may not be human, but they are making decisions, executing tasks, and interfacing with critical systems just like any flesh-and-blood employee might.

As such, AI agents have emerged as a new type of workload identity. And yet, in the vast majority of organizations, they are not being treated that way. That’s got to change—and soon.

AI Agents as a New Class of Workload Identity

In cybersecurity, the term identity refers to a set of attributes that uniquely identifies an entity: the virtual representation of who (or what) is accessing a resource. Often an identity is a person. Just as often, it is not.

That’s where workload identities come in: they’re credentials and access policies assigned to non-human actors (apps, containers, services) to authenticate and interact securely across systems. Up until very recently, these workloads were easy to predict, with lifecycles and actions governed entirely by development pipelines and operational routines.

AI agents are different. Unlike their predecessors, they ingest data, reason, orchestrate actions, and even collaborate with other agents. From a system perspective, they operate more like humans than machines, interacting with APIs, databases, cloud services, and internal tools using persistent credentials.

This introduces two pressing challenges. First, many AI agents today are operating without clear registration, tracking, or access governance. Second, security teams may not even be aware of their presence.

The implication is clear: organizations need to extend workload identity management principles to AI agents. That means authenticating agents with unique credentials, logging their actions for accountability, and applying lifecycle controls just as they would for any other workload.

The default security frameworks in place today are designed for either human users or traditional applications, both of which behave in relatively predictable ways. AI agents don’t follow the same rules. They introduce dynamic (and often unpredictable) behavior. Their autonomy means they can be influenced by new inputs, by changes in their environment, or by unexpected interdependencies. Any time unpredictability is introduced into the equation, it opens the door to novel vulnerabilities.

Let’s consider an AI agent designed to streamline the automated generation and deployment of TLS/SSL certificates to microservices as they spin up. If that agent is operating without a defined identity or lifecycle policy, it might request certificates for unauthorized workloads, issue certificates with insecure parameters or no expiration, or leave certificates neglected and vulnerable when workloads are decommissioned.

Even a well-intentioned agent could unintentionally flood a network with unmanaged, unused certificates, which creates a sprawl that makes it harder for security teams to track, rotate, and revoke credentials. Without proper workload identity governance and certificate lifecycle automation, organizations could face outages, trust failures, or exposure to man-in-the-middle attacks.
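To make that risk concrete, here is a minimal sketch of the kind of policy gate such a certificate-provisioning agent could be forced through before issuance. The workload allow-list, field names, and 90-day maximum are illustrative assumptions, not a prescribed standard or any vendor’s API.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical allow-list of workloads the agent may request certificates for.
AUTHORIZED_WORKLOADS = {"payments-api", "orders-svc"}
MAX_VALIDITY = timedelta(days=90)  # never allow non-expiring certificates

@dataclass
class CertRequest:
    agent_id: str        # identity of the requesting AI agent
    workload: str        # workload the certificate is for
    validity: timedelta  # requested certificate lifetime

def validate_request(req: CertRequest) -> list[str]:
    """Return a list of policy violations; an empty list means the request may proceed."""
    violations = []
    if not req.agent_id:
        violations.append("request is missing an agent identity")
    if req.workload not in AUTHORIZED_WORKLOADS:
        violations.append(f"workload '{req.workload}' is not authorized")
    if req.validity > MAX_VALIDITY:
        violations.append("requested validity exceeds policy maximum")
    return violations

if __name__ == "__main__":
    # An over-reaching request from the agent is denied rather than silently honored.
    req = CertRequest(agent_id="cert-bot-01", workload="shadow-service",
                      validity=timedelta(days=3650))
    for v in validate_request(req):
        print("DENY:", v)
```

The point of a gate like this is not the specific checks, but that the agent’s requests are tied to a named identity and evaluated against policy before anything is issued.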


Best Practices for Securing AI Agents

Well-meaning agents being manipulated into malicious activity is a new challenge for enterprise security. The good news is that this new landscape doesn’t require abandoning existing security principles. But it does require evolving them. That has to start with awareness.

Most organizations simply do not have visibility into what AI agents are running in their environments. The typical enterprise environment will have some agents approved by IT and others spun up independently by teams experimenting with automation. You can’t protect what you don’t know is there, so discovery is essential.

By inspecting traffic patterns, monitoring API calls, and scanning code repositories, security teams can begin to identify where AI agents are active, what they’re connected to, and what level of autonomy they possess.
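As a rough illustration of the code-repository angle, a simple scanner might flag files that import common LLM or agent SDKs. The patterns below are illustrative assumptions and far from exhaustive; real discovery would also cover traffic and API telemetry.

```python
import re
from pathlib import Path

# Illustrative (not exhaustive) import patterns that suggest agent frameworks or LLM SDKs.
AGENT_HINTS = [
    r"\bimport\s+openai\b",
    r"\bfrom\s+langchain\b",
    r"\bimport\s+anthropic\b",
]

def scan_repo(root: str) -> dict[str, list[str]]:
    """Walk a repository and report files whose imports hint at AI agent code."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        matched = [p for p in AGENT_HINTS if re.search(p, text)]
        if matched:
            hits[str(path)] = matched
    return hits

if __name__ == "__main__":
    for file, patterns in scan_repo(".").items():
        print(file, "->", patterns)
```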

The next step, once agents have been identified, is for organizations to integrate them into their threat modeling exercises. Unfortunately, traditional models won’t effectively account for AI’s probabilistic behavior, so threat modeling for agents must take into account new factors such as memory manipulation, orchestration between tools, and prompt-based attack vectors. Security teams must also assess the level of “agency” each agent possesses. The more independently an agent operates, the higher the risk, and the greater the need for security controls.

Security teams will also need to evolve the way they conduct security testing. AI agents operate across a variety of systems, relying on both structured and unstructured data. That means testing must expand to account for a broader surface area across the enterprise. It’s not enough to test LLMs: you must also test the tools and APIs agents use to take action.


Penetration testing and red teaming remain critical, but they too must expand to include AI-powered agents as part of the testing process. Essentially, companies will be employing autonomous agents as both builders and checkers. And as security teams begin to wrap their heads around the full picture of what it means to secure AI agents, it will be key for them to understand that it’s not just about preventing malicious behavior: it’s about understanding how agents behave, testing their boundaries, and ensuring that their actions are aligned with security protocols.

Taken together, these steps point to four core capabilities organizations need to secure AI agents effectively:

  • Strong identities

  • Fine-grained access controls

  • High degree of auditability—so SIEMs and other detection tools can spot anomalous behavior

  • Rapid access revocation—in case an AI agent deviates from expected behavior

By embedding these principles into their identity-first security strategies, organizations can begin to manage AI agents with the same rigor as other high-risk workloads.
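To ground those four capabilities, here is a deliberately simplified, in-memory sketch of an agent registry that issues unique short-lived credentials, enforces scoped permissions, writes an audit trail a SIEM could ingest, and supports immediate revocation. Class names, scopes, and the 60-minute default are hypothetical.

```python
import logging
import secrets
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")  # audit trail for downstream detection tools

class AgentRegistry:
    """Toy registry: unique identity, scoped access, auditability, rapid revocation."""

    def __init__(self):
        self._agents = {}

    def register(self, name: str, scopes: set[str], ttl_minutes: int = 60) -> str:
        token = secrets.token_urlsafe(16)          # strong, unique credential per agent
        self._agents[token] = {
            "name": name,
            "scopes": scopes,                      # fine-grained access controls
            "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
            "revoked": False,
        }
        audit.info("registered agent %s with scopes %s", name, scopes)
        return token

    def authorize(self, token: str, action: str) -> bool:
        agent = self._agents.get(token)
        allowed = (agent is not None and not agent["revoked"]
                   and datetime.now(timezone.utc) < agent["expires"]
                   and action in agent["scopes"])
        audit.info("agent=%s action=%s allowed=%s",
                   agent["name"] if agent else "unknown", action, allowed)
        return allowed

    def revoke(self, token: str) -> None:
        if token in self._agents:
            self._agents[token]["revoked"] = True  # rapid revocation on anomalous behavior
            audit.info("revoked agent %s", self._agents[token]["name"])
```

A production system would back this with a real identity provider and certificate authority rather than an in-memory dictionary, but the contract is the same: no identity, no scope, no action.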

The Role of PKI in Establishing Trust


I’ve talked about the importance of existing security principles in this new AI landscape, and public key infrastructure (PKI) is a prime example. PKI is the backbone for digital trust, serving as the foundational technology for securing communications between all machine identities. It’s critical that it extend into the realm of AI agents.

PKI enables secure online authentication and communication through digital certificates, which serve as digital equivalents of identification documents. When AI agents need to communicate with cloud services, access data warehouses, or interact with other applications, digital certificates provide the most reliable method for authentication and encryption. But the scale of AI agent deployments will create real challenges for most organizations trying to manage those certificates.

Think of the scale of the challenge: organizations may need to issue, manage, and rotate certificates for thousands of AI agents, each with different lifecycle requirements. Some agents may operate for short periods of time before being terminated, while others may require persistent identity over extended periods as they are integrated into ongoing business processes. That level of diversity requires scalable certificate issuance, automated rotation capabilities, and efficient revocation processes.

The verification process is critical with thousands of AI agents let loose in the enterprise. Without PKI, an AI agent could impersonate another, intercept sensitive data, or act on behalf of an unauthorized system. With PKI, organizations can issue certificates to AI agents that prove their identity and secure their communications. This includes ephemeral certificates for short-lived agents and hardware-bound credentials for devices like drones or medical sensors.
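As a hedged illustration of what an ephemeral agent certificate might look like, the sketch below uses the open-source Python `cryptography` library to mint a one-hour certificate. The self-signed issuer and the agent name are stand-ins for a real, secured issuing CA; this is a demonstration of the shape of the credential, not a production issuance flow.

```python
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical: in practice the issuing key lives in a protected CA, not with the agent.
key = ec.generate_private_key(ec.SECP256R1())
subject = issuer = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "agent-invoice-bot-42"),
])
now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(hours=1))   # ephemeral: expires in one hour
    .sign(key, hashes.SHA256())
)
print(cert.subject)  # the agent now has a verifiable, short-lived identity
```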

In short, PKI allows organizations to answer the most fundamental questions in agent security: Is this entity who it claims to be, and can we trust it to do what it says? Without answers to those questions, you’re dead in the water.

While PKI provides the foundation for trust, it must be paired with the right mechanisms that allow agents to act safely on behalf of users. This is where the Model Context Protocol (MCP) comes in.

Agentic systems need tools to execute tasks, and MCP provides the secure interface that connects agents with enterprise applications. By allowing agents to use existing apps through a natural-language interface, MCP enables powerful new workflows while ensuring that those actions remain governed by authentication, authorization, and trust policies.
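The sketch below is not the MCP SDK itself but a simplified illustration of the pattern: every tool an agent can reach is paired with the scopes its verified identity must carry, and calls outside those scopes are refused. The tool names and scopes are hypothetical.

```python
from typing import Callable

# Hypothetical tool table an MCP-style server might expose to agents, paired with
# the scopes an authenticated agent must hold to invoke each tool.
TOOLS: dict[str, tuple[set[str], Callable[..., str]]] = {
    "create_ticket": ({"tickets:write"}, lambda title: f"created ticket: {title}"),
    "read_customer": ({"crm:read"}, lambda cid: f"customer record {cid}"),
}

def call_tool(agent_scopes: set[str], tool: str, *args) -> str:
    """Dispatch a tool call only if the agent's verified scopes allow it."""
    required, fn = TOOLS[tool]
    if not required <= agent_scopes:
        raise PermissionError(f"agent lacks scopes {required - agent_scopes} for '{tool}'")
    return fn(*args)

if __name__ == "__main__":
    scopes = {"tickets:write"}   # scopes bound to the agent's authenticated identity
    print(call_tool(scopes, "create_ticket", "Rotate expiring certs"))
```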

Used together, PKI and MCP provide both the proof of identity and the secure means of action that are essential for responsible AI adoption.

Preparing for an Agentic Future

We are firmly entrenched in the agentic AI era: autonomous agents are in production, augmenting human teams, optimizing processes, and unlocking new efficiencies. But the shift has come with alarming new risks. Old paradigms and assumptions, like the idea that all workloads are predictable or all identities are human, are already out of date.

Security leaders who are not already starting to bring AI agents into the fold of identity-first security strategies are going to put their organizations at risk. That means recognizing them as workload identities, assigning them proper credentials, incorporating them into threat models, and validating their integrity through robust testing and lifecycle controls. The sooner teams embrace this shift, the better equipped they’ll be to secure the next generation of enterprise automation.

About the Author

Ellen Boehm

SVP of IoT Strategy & Operations at Keyfactor

Ellen Boehm is the SVP of IoT Strategy & Operations at Keyfactor. She leads the product strategy and go-to-market approach for the Keyfactor Control platform, focusing on digital identity security solutions for the IoT device manufacturer market. Ellen is passionate about IoT and helping customers establish strong security implementations across the lifecycle of their IoT systems. She has over 15 years of experience leading new product development for IoT and connected products in lighting controls, smart cities, connected buildings, and smart home technology, and previously held leadership roles in product and engineering at General Electric and Sky Technologies.
