AI has entered its next phase. For years, most AI systems informed decisions; now, a new class of systems—AI agents—makes decisions on our behalf. This shift creates opportunity and introduces new forms of risk. Agents do not just produce outputs; they take action. That distinction expands the attack surface beyond a model’s parameters to how systems interact, how authority is delegated, and how decisions are executed in real time.
This is not theoretical. It is a structural change in how modern systems operate. Build-time rigor, a core tenet of good security practice, still matters. But for autonomous and semi-autonomous agents, controls must also operate at the point of execution and produce tamper‑evident, complete runtime records to support audits, investigations and incident response.
At JPMorganChase, operating at global scale has taught us a simple lesson: align safeguards to capability and risk. Confined, read‑only agents merit lighter guardrails. Agents that combine the so-called “lethal trifecta”—processing untrusted inputs, accessing sensitive data and holding the authority to act externally—require robust, continuous enforcement and oversight. The potential blast radius increases when these conditions compound, and the safeguards must scale accordingly.
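The tiering idea above can be made concrete. The sketch below is illustrative only: the function name and tier labels are invented for this example, not a published framework, but it shows how the number of compounding trifecta conditions might map to stricter safeguards.

```python
# Illustrative sketch: map the "lethal trifecta" conditions an agent
# combines to a safeguard tier. Names and tiers are hypothetical.
def safeguard_tier(untrusted_inputs: bool,
                   sensitive_data: bool,
                   external_authority: bool) -> str:
    conditions = sum([untrusted_inputs, sensitive_data, external_authority])
    if conditions == 3:
        return "continuous enforcement and human oversight"
    if conditions == 2:
        return "robust runtime controls"
    if conditions == 1:
        return "standard guardrails"
    return "light guardrails"

# A confined, read-only agent vs. one combining all three conditions.
print(safeguard_tier(False, False, False))
print(safeguard_tier(True, True, True))
```

The point of the sketch is that risk compounds: each added condition moves the agent up a tier, rather than being assessed in isolation.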
This requires a different emphasis:
- Clear authorization boundaries around what these systems are allowed to do.
- Constraints on how they interact with other services and environments.
- Auditable records of actions taken across automated workflows.
The challenge is no longer securing models. It is securing agents operating in dynamic, interconnected environments.
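The three emphases above can be combined in a single enforcement point. The following is a minimal sketch, with hypothetical class and field names, of an action gate that enforces an authorization boundary and records every decision, allowed or not, for later audit.

```python
import time

# Hypothetical sketch: a gate that checks each agent action against an
# allowlist (authorization boundary) and appends every decision to an
# audit log (auditable record), including denied attempts.
class ActionGate:
    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)
        self.audit_log = []

    def request(self, agent_id: str, action: str, target: str) -> bool:
        allowed = action in self.allowed_actions
        self.audit_log.append({
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "target": target,
            "allowed": allowed,
        })
        return allowed

gate = ActionGate(allowed_actions={"read_report", "summarize"})
print(gate.request("agent-7", "read_report", "q3.pdf"))   # inside the boundary
print(gate.request("agent-7", "wire_funds", "acct-123"))  # outside the boundary
print(len(gate.audit_log))                                # both attempts are logged
```

Note that the denied request still leaves a record: an audit trail that only captures successful actions cannot support investigations into attempted misuse.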
Why this matters now
The broader technology ecosystem is already highly interconnected. Organizations rely on cloud platforms, APIs, and third-party services to operate at scale. That interconnectedness has created efficiency but also concentrated risk. We’ve seen how failures in shared infrastructure and supply chains can have systemic impact. AI agents build directly on those integration patterns, and in many cases extend them by introducing automated decision-making and execution.
At the same time, deployment of these capabilities is accelerating. As with prior technology shifts, there is a risk that capability advances faster than the security practices needed to support it.
Addressing this requires moving beyond static controls and towards continuous, runtime governance of agent behavior. Controls must operate at the point of execution, producing evidence of constraint, accountability and containment, so that security is a primary objective rather than a suggestion.
Where we see challenges
As AI agents become more capable, the associated risks surface in common yet critical domains. In practice, we see these challenges emerging across the following areas:
Software
AI agents increasingly rely on orchestration layers, tools and external integrations to perform tasks. That creates a broader and more dynamic attack surface than traditional applications.
Risks emerge from how agents sequence actions, invoke tools and interact with external services. Even when individual actions are authorized, the way they are combined can produce unintended outcomes.
Securing this layer requires stronger controls around execution, integration and runtime behavior, not just the underlying model.
Identity and Authorization
AI agents operate using delegated authority. They access services, call APIs and perform actions on behalf of users or other systems. That makes identity foundational.
Organizations need clear boundaries around what agents are allowed to do, how authority is granted and how actions are tied back to accountable identities. As agents interact with each other, establishing trust between those systems becomes just as important as traditional user authentication.
Without strong identity and authorization controls, a single compromised agent can propagate risk across environments.
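One way to make delegated authority explicit is to represent each grant as a record naming the accountable principal, the specific agent, the scopes it may exercise and an expiry. The sketch below is illustrative; the class and scope names are invented, and a production system would use signed tokens rather than in-memory objects.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch: delegated authority as an explicit, expiring grant
# that ties every agent action back to an accountable human principal.
@dataclass(frozen=True)
class Delegation:
    principal: str        # accountable identity the agent acts for
    agent_id: str         # the specific agent granted authority
    scopes: frozenset     # what the agent is allowed to do
    expires_at: float     # delegated authority is time-bounded

    def permits(self, agent_id: str, scope: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        return (agent_id == self.agent_id
                and scope in self.scopes
                and now < self.expires_at)

grant = Delegation(
    principal="alice@example.com",
    agent_id="report-agent",
    scopes=frozenset({"calendar.read"}),
    expires_at=time.time() + 3600,
)
print(grant.permits("report-agent", "calendar.read"))   # within the grant
print(grant.permits("report-agent", "calendar.write"))  # scope never delegated
print(grant.permits("other-agent", "calendar.read"))    # wrong agent
```

Because the grant names both the principal and the agent, a log of permitted actions can always be traced back to an accountable identity, and the expiry bounds how long a compromised agent can act.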
Data
AI agents blur the line between data and instructions. Inputs can influence behavior, not just outputs.
This creates new risks around how data is ingested, interpreted and reused, including the potential for manipulation through external content, persistent context or system memory.
At the same time, as agents interact across environments, maintaining clarity on where data comes from, how it's used and how it influences actions becomes more difficult.
To address these concerns, data needs to be “policy-aware” at runtime. This requires data to travel with labels that describe what it is, how sensitive it is and what it’s allowed to be used for. Those rules need to be enforced wherever the data travels, producing an auditable trail. Vendors and partners need to be held to the same standard with visibility into how data is being used and the ability to remove it from training if needed. This approach maintains operational controls aligned with business and customer obligations across the full lifecycle.
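A minimal sketch of policy-aware data follows. The label structure and purpose names are hypothetical, but the pattern is the one described above: the value travels with a label stating its sensitivity and permitted uses, every use is checked against that label wherever the data goes, and each check leaves an auditable trace.

```python
from dataclasses import dataclass

# Hypothetical sketch: data that carries its own policy. Each value is
# wrapped with a label; use() enforces the label and records the outcome.
@dataclass(frozen=True)
class DataLabel:
    sensitivity: str           # e.g. "public", "confidential"
    permitted_uses: frozenset  # purposes this data may serve

@dataclass
class LabeledValue:
    value: object
    label: DataLabel

audit_trail = []  # every use, permitted or not, is recorded

def use(item: LabeledValue, purpose: str):
    allowed = purpose in item.label.permitted_uses
    audit_trail.append((purpose, item.label.sensitivity, allowed))
    if not allowed:
        raise PermissionError(f"purpose {purpose!r} not permitted for this data")
    return item.value

record = LabeledValue(
    value={"balance": 1200},
    label=DataLabel("confidential", frozenset({"fraud_review"})),
)
print(use(record, "fraud_review"))   # permitted purpose
try:
    use(record, "model_training")    # not permitted: raises, but still audited
except PermissionError as exc:
    print(exc)
```

The key design choice is that enforcement happens at the point of use rather than at ingestion, so the policy still applies after the data has moved between systems, vendors or agents.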
The practical path forward
Organizations need more than high-level principles. They need to see how those principles apply in real-world, commercially available technology and systems.
Reference architectures and implementation examples can demonstrate how to:
- Manage agent-to-system and agent-to-agent interactions.
- Enforce authorization when agents act across services.
- Monitor behavior across automated workflows.
- Maintain control in environments where agents operate continuously.
Cybersecurity frameworks already define the outcomes organizations should achieve, such as identity management, access control, monitoring and resilience. What is changing is how those outcomes must be implemented.
When actions are initiated by agents rather than directly by people, organizations need to ensure:
- Machine-to-machine interactions are properly authenticated and authorized.
- Actions taken by agents are traceable and auditable.
- Higher risk operations are subject to stronger safeguards.
- Agents can be constrained or stopped when behavior deviates from expectations.
These are not new objectives. They are new operating conditions for established controls.
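The operating conditions above can be composed into a single runtime guard. The sketch below is illustrative, with invented class names and thresholds: actions are tiered by risk, higher-risk actions require explicit human approval, every decision is traced and an agent is halted once its denied attempts cross a threshold.

```python
# Hypothetical sketch: a runtime guard combining risk tiering, stronger
# safeguards for high-risk actions, full tracing and a stop condition.
class RuntimeGuard:
    def __init__(self, risk_tiers: dict, max_denials: int = 3):
        self.risk_tiers = risk_tiers   # action -> "low" | "high"
        self.max_denials = max_denials
        self.denials = {}
        self.halted = set()
        self.trace = []                # traceable, auditable decisions

    def authorize(self, agent_id: str, action: str,
                  human_approved: bool = False) -> str:
        if agent_id in self.halted:
            decision = "halted"
        elif self.risk_tiers.get(action, "high") == "high" and not human_approved:
            decision = "denied"        # high-risk needs a stronger safeguard
            self.denials[agent_id] = self.denials.get(agent_id, 0) + 1
            if self.denials[agent_id] >= self.max_denials:
                self.halted.add(agent_id)  # stop the agent on repeated deviation
        else:
            decision = "allowed"
        self.trace.append((agent_id, action, decision))
        return decision

guard = RuntimeGuard({"read": "low", "transfer": "high"}, max_denials=2)
print(guard.authorize("a1", "read"))                          # low risk
print(guard.authorize("a1", "transfer"))                      # denied without approval
print(guard.authorize("a1", "transfer", human_approved=True)) # stronger safeguard met
```

Two choices in the sketch mirror the conditions above: unknown actions default to high risk rather than low, and the halt state is sticky, so a deviating agent stays stopped until someone accountable intervenes.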
Looking ahead
AI agents continue to evolve, and systems that can act across environments will become more common.
Security must evolve alongside that shift. This is not about slowing progress. It is about ensuring that progress is built on foundations that can scale securely across firms, sectors and the global economic system.
The path forward is clear: build on what works, focus on practical implementation and address challenges early together.