AI agents are no longer a future consideration. They are in production, running with real identities, accessing real data, and taking actions across real workflows. That shift changes what security teams need to think about, and the recently published OWASP Top 10 for Agentic Applications (2026) gives the industry a practical framework for doing exactly that.
Microsoft published a post this week walking through the OWASP list and explaining how Microsoft Copilot Studio addresses each risk. If you are building or governing agents in a Microsoft environment, it is worth reading. This post pulls out the practical substance and adds some context for security and architecture teams trying to figure out where to start.
Why Agentic AI Is a Different Security Problem
The reason agentic AI demands its own security framework is not complexity for its own sake. It is because these systems collapse domains that security teams have historically managed separately. Application risk, identity risk, and data risk have traditionally sat in different columns with different owners. An agent operating autonomously across workflows touches all three simultaneously, and when something goes wrong, the failure does not stop at a single bad response. It can propagate as an automated sequence of access, execution, and downstream impact.
There is also a subtler problem. An agent can be working exactly as designed while still taking steps that a human would not approve, because the boundaries were set too loosely, permissions were broader than necessary, or tool use was not tightly governed. That gap between "working as intended" and "behaving safely" is where most agentic security incidents will emerge.
The Ten Failure Modes Worth Understanding
The OWASP Top 10 for Agentic Applications catalogues the primary risks of autonomous systems that act with real identities, data access, and tools. Read as a set, the ten risks fall into three practical clusters.
The first is instruction and context manipulation. This covers agent goal hijack (ASI01), memory and context poisoning (ASI06), and human-agent trust exploitation (ASI09). If your agent retrieves content from external sources, processes user-supplied inputs, or maintains memory across sessions, you need clear controls over what the agent is willing to treat as instruction. Prompt injection sits in this cluster, and it remains one of the most practical attack vectors against deployed agents.
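One way to make "what the agent treats as instruction" concrete is to separate untrusted content from the prompt explicitly and screen it before it ever reaches the model. The sketch below is illustrative only: the delimiter tag, function names, and pattern list are assumptions, not part of any platform API, and a real deployment would pair this with model-based classifiers and platform-level controls rather than rely on pattern matching alone.

```python
import re

# Hypothetical lexical screen for obvious goal-hijack phrasing (ASI01).
# The patterns are assumptions chosen for illustration, not a complete list.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now",
]

def quarantine_untrusted(content: str) -> str:
    """Wrap retrieved content in explicit data delimiters so the prompt
    template can tell the model to treat it as data, never as commands."""
    return f"<untrusted_data>\n{content}\n</untrusted_data>"

def flag_injection(content: str) -> bool:
    """Flag content that looks like an attempt to override agent goals."""
    lowered = content.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)
```

The value is less in the specific patterns than in the structure: external content arrives through one choke point where it can be marked, screened, and logged before the agent reasons over it.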
The second cluster is identity and privilege. This covers identity and privilege abuse (ASI03), insecure inter-agent communication (ASI07), and rogue agents (ASI10). As multi-agent architectures become more common, the question of how agents establish trust with each other, and how delegated credentials flow across that chain, becomes genuinely important. Security teams with a strong identity background will find these risks familiar, even if the surface area is new.
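The delegation problem can be reduced to an invariant worth enforcing regardless of platform: as credentials flow down a chain of agents, scopes may only narrow, never widen, and every hop must be a known identity. The sketch below is a minimal illustration of that invariant, with hypothetical agent names and scope strings; it is not how any specific product implements delegation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    """One hop in a delegation chain: which agent acts, with which scopes."""
    agent_id: str
    scopes: frozenset

def validate_chain(chain: list, known_agents: set) -> bool:
    """Reject the chain if any hop is an unknown agent (ASI10) or holds
    broader scopes than the hop that delegated to it (ASI03)."""
    if not chain:
        return False
    allowed = chain[0].scopes
    for hop in chain:
        if hop.agent_id not in known_agents:
            return False
        if not hop.scopes <= allowed:  # privilege must never widen downstream
            return False
        allowed = hop.scopes
    return True
```

Security teams with an identity background will recognise this as the same monotonic-narrowing rule applied to OAuth scope delegation, restated for agent-to-agent trust.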
The third cluster is execution and supply chain. This covers tool misuse (ASI02), unexpected code execution (ASI05), agentic supply chain vulnerabilities (ASI04), and cascading failures (ASI08). This is where the blast radius gets serious. An agent that can invoke tools, execute code, or chain actions across systems has the potential to cause significant damage if those capabilities are not tightly constrained from the outset.
What Copilot Studio Provides in Practice
At the build stage, Copilot Studio constrains agents to predefined actions, connectors, and capabilities rather than allowing arbitrary code execution or open-ended tool invocation. That directly reduces the surface area for tool misuse (ASI02) and unexpected code execution (ASI05). Agents run in isolated environments and cannot modify their own logic without going through a formal republishing process, which addresses rogue agent risk (ASI10) at an architectural level rather than relying on the agent to police itself.
The ability to disable or restrict a deployed agent quickly also matters more than it might seem. Knowing you can pull an agent out of service without unravelling complex dependencies is a practical incident response capability that should be confirmed before deployment, not discovered during one.
On the governance side, Microsoft Agent 365 becomes generally available on 1 May. It is the centralised control plane for agent oversight, covering visibility into agent usage and behaviour, policy enforcement, access and identity controls to reduce privilege escalation risk (ASI03), data security controls to detect sensitive data exposure (ASI09), and threat protection to surface prompt injection (ASI01), tool misuse (ASI02), and compromised agents (ASI10).
The Questions to Ask Before You Deploy
What permissions does each agent actually need? Least privilege applies to agents in exactly the same way it applies to service accounts. Permissions granted for convenience become a liability when agent behaviour drifts or an agent is compromised.
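In code, least privilege for agents reduces to a default-deny lookup: an agent may perform only the actions it was explicitly granted, and everything else fails closed. The registry below is a hypothetical sketch (the agent ID and action strings are invented for illustration), not a representation of any platform's permission model.

```python
# Hypothetical permission registry: each agent maps to the connectors and
# actions it has been explicitly granted. Anything absent is denied.
AGENT_GRANTS = {
    "expense-agent": {"sharepoint:read", "approvals:create"},
}

def is_permitted(agent_id: str, action: str) -> bool:
    """Default-deny check, mirroring least privilege for service accounts:
    unknown agents and ungranted actions both fail closed."""
    return action in AGENT_GRANTS.get(agent_id, set())
```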
How is the agent's identity established and governed? Microsoft Entra Agent ID is designed to give agents a consistent, governable identity, but it needs to be in scope for your identity governance processes rather than treated as an automatic safety net.
What does the agent store across sessions, and who can influence it? Persistent memory introduces a layer that can be corrupted or manipulated to skew future reasoning. Know what your agent retains, where it is stored, and what inputs can affect it.
Can you observe what the agent is doing in production? Audit logs and usage visibility are not optional for systems acting autonomously on real data. If you cannot see what an agent did and why, you cannot govern it effectively, and you cannot investigate when something goes wrong.
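The minimum bar for observability is one structured, attributable event per agent action. The field names below are assumptions chosen for illustration; what matters is that every action records who acted, on what, and with what outcome, in a form that can be queried after the fact.

```python
import json
import time

def audit_record(agent_id: str, action: str, target: str, outcome: str) -> str:
    """Emit one structured audit event per agent action so that every
    action is attributable and reconstructable during an investigation."""
    event = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "outcome": outcome,
    }
    return json.dumps(event)
```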
What is your response plan when an agent behaves unexpectedly? The ability to disable quickly, investigate what happened, and contain impact should be part of your deployment checklist. Treat it the same way you would treat incident response planning for any other privileged system.
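The "disable quickly" capability is worth expressing in its simplest form: a central flag that an operator can flip, checked by the agent's action loop before every step. This is a minimal sketch of the pattern, not any platform's mechanism; the class and method names are invented for illustration.

```python
import threading

class AgentKillSwitch:
    """Central disable flag an operator can flip during an incident.
    The agent's action loop checks it before executing each step."""

    def __init__(self):
        self._disabled = set()
        self._lock = threading.Lock()

    def disable(self, agent_id: str) -> None:
        """Take the agent out of service immediately."""
        with self._lock:
            self._disabled.add(agent_id)

    def is_enabled(self, agent_id: str) -> bool:
        with self._lock:
            return agent_id not in self._disabled
```

The design point is that containment lives outside the agent: the switch works even if the agent's own reasoning is compromised, which is exactly the scenario it exists for.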
The Underlying Principle Has Not Changed
The OWASP Top 10 for Agentic Applications is a solid framework, and Microsoft's mapping of it to Copilot Studio and Agent 365 is practically useful for organisations building on that platform. But the point that matters most applies universally: organisations that treat agents as privileged applications, with clear identities, scoped permissions, continuous oversight, and lifecycle governance, are better positioned to manage risk as agentic AI scales.
The technology is new. The security principles are not. Establish governance before agents are embedded in your workflows, because retrofitting controls after the fact is significantly harder than building them in from the start.
Reference: Addressing the OWASP Top 10 Risks in Agentic AI with Microsoft Copilot Studio, Microsoft Security Blog, 30 March 2026.