There is a particular kind of organisational self-deception that happens when a new technology arrives faster than the governance function can absorb it. The instinct is to treat the technology as the problem. Build a wall. Slow it down. Wait for the policy team to catch up. What actually happens is that the wall gets walked around, the slowdown creates shadow IT, and the policy team produces a framework that is already obsolete by the time it lands on the intranet.
That is where most organisations are right now with AI agents.
Ryan Jones, Partner Director of Product Management for Power Apps, put it plainly in a recent piece on the Power Platform blog: most organisations are not struggling to adopt agents because the technology is unsafe. They are struggling because their governance models were built for a world that no longer exists. That framing is worth sitting with, because it shifts the problem from the technology to the organisation. The agents are not the issue. The governance architecture is.
The Old Model Was Built on a Boundary That Has Dissolved
Traditional security and governance thinking rested on a reasonably stable distinction between inside and outside. You knew where your data lived, you knew what your systems did, and you could draw a perimeter around the things that mattered. Governance was largely a function of controlling who crossed that perimeter and how.
Agents do not respect that model. They move fluidly across applications, data sources, and workflows. A single agent in Copilot Studio might query a SharePoint library, call a Dataverse table, invoke a Power Automate flow, and surface a result in Teams, all within a single user interaction. The "perimeter" in that scenario is not a boundary you can govern with a policy document and a weekly review meeting.
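To make that concrete, here is a deliberately toy sketch of such an interaction. Nothing below is the real Copilot Studio or Power Platform SDK; the stub services and function names are invented purely to show how many boundaries a single turn crosses.

```typescript
// Minimal stubs standing in for real services. Everything here is
// hypothetical; this is not the Copilot Studio or Power Platform SDK.
const sharePoint = { search: async (q: string) => [`doc matching "${q}"`] };
const dataverse = { query: async (table: string) => ({ table, rows: 3 }) };
const flows = { trigger: async (name: string) => name };
const teams = { reply: async (user: string, msg: string) => `to ${user}: ${msg}` };

// One user turn crosses four system boundaries. The "perimeter" is
// every hop inside this function, not a wall around it.
async function handleTurn(user: string, question: string): Promise<string> {
  const docs = await sharePoint.search(question);            // SharePoint library
  const data = await dataverse.query("accounts");            // Dataverse table
  await flows.trigger("log-agent-activity");                 // Power Automate flow
  return teams.reply(user, `${docs[0]}, ${data.rows} rows`); // surfaced in Teams
}
```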
The pace is the other half of the problem. When building an agent can take minutes, a governance process built around week-long manual reviews is not a safeguard. It is a delay with a bypass already baked in.
Governance That Works Has to Be Opinionated About Risk
The most useful insight in Jones's framework is the move away from binary governance (allowed or not allowed) toward a risk-based model with graduated zones. A personal productivity agent with narrow data access is not the same thing as an agent running inside a credit approval workflow or a regulated reporting process. Treating them identically either stalls the first unnecessarily or under-protects the second catastrophically.
The graduated zone model is not a new concept in security architecture. We have applied it to network segmentation, to identity, to data classification. What is new is applying it deliberately and systematically to AI agents and the workflows they operate within. Low-risk scenarios get a self-serve path with guardrails. Medium-risk scenarios trigger review without killing momentum. High-risk scenarios require deliberate design from the start, not as a blocker but as a condition of operating in that zone.
This is the kind of thinking that has to be embedded at the platform layer. If governance only exists in policy documents, it will be ignored. If it is enforced by the platform, governance becomes the path of least resistance rather than an obstacle to the work.
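As a thought experiment, a graduated zone model enforced in code rather than in a document might look something like the following sketch. The zone names, the agent manifest shape, and the classification thresholds are all assumptions for illustration, not any real platform's API.

```typescript
type RiskZone = "low" | "medium" | "high";

interface AgentManifest {
  name: string;
  dataSources: string[];          // e.g. ["SharePoint", "Dataverse"]
  touchesRegulatedProcess: boolean;
  writesToSystemsOfRecord: boolean;
}

// Opinionated classification: the agent lands in the lowest zone its
// manifest can justify, not the most restrictive zone by default.
function classify(agent: AgentManifest): RiskZone {
  if (agent.touchesRegulatedProcess) return "high";
  if (agent.writesToSystemsOfRecord || agent.dataSources.length > 2) return "medium";
  return "low";
}

// Enforcement lives in code, not in a document: low risk is self-serve
// with guardrails, medium triggers review without blocking, high cannot
// activate until a deliberate design review has happened.
const zonePolicy: Record<RiskZone, { allowed: boolean; condition: string }> = {
  low:    { allowed: true,  condition: "guardrails applied automatically" },
  medium: { allowed: true,  condition: "flagged for asynchronous review" },
  high:   { allowed: false, condition: "design review required before activation" },
};

function enforce(agent: AgentManifest) {
  return zonePolicy[classify(agent)];
}
```

The detail matters less than the location: the moment classification and enforcement live in the platform, "what zone is this agent in" stops being a meeting and becomes a property of the deployment.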
Agents Expose the Problems You Already Have
One of the sharper points in the article deserves more attention than it typically gets. Agents, by default, operate as the calling user. They do not acquire new permissions. Which means that if your identity and access management is a mess, your agents will inherit that mess at speed and at scale.
This is not an AI problem. It is an identity hygiene problem that AI has made visible. Organisations that have drifted into a state of accumulated over-permissioning, where users have access to far more than they need because provisioning was easier than deprovisioning, will find that agents execute that excess access with uncomfortable efficiency.
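A minimal sketch of that principle, with an invented permission store, makes the inheritance obvious: the agent adds no access of its own, so every stale grant a user carries becomes a grant the agent can exercise.

```typescript
type Permission = string; // e.g. "read:finance-library"

// An invented stand-in for the identity provider's view of a user's
// entitlements, including every stale grant nobody ever deprovisioned.
const userPermissions = new Map<string, Set<Permission>>([
  ["alice@contoso.com", new Set(["read:hr-library", "read:finance-library"])],
]);

// The agent never escalates: its effective access is exactly the calling
// user's. An over-permissioned user produces an over-permissioned agent,
// now executing that excess at machine speed.
function agentCanAccess(callingUser: string, required: Permission): boolean {
  return userPermissions.get(callingUser)?.has(required) ?? false;
}

// Alice's stale finance grant flows into every agent she invokes.
console.log(agentCanAccess("alice@contoso.com", "read:finance-library")); // true
```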
The governance conversation around AI agents cannot be separated from the maturity conversation around identity. If your zero trust posture is aspirational rather than operational, agents will expose that gap before you are ready to address it.
The Deeper Problem Nobody Is Quite Ready to Admit
Here is the uncomfortable truth sitting beneath all of this: even organisations that want to govern AI agents well often lack the frameworks, the tooling, and frankly the conceptual vocabulary to do it properly. Adaptive governance is a reasonable aspiration. The practical reality is that most enterprises are assembling something from parts that were not designed to work together, applying controls built for static software to systems that reason, infer, and act.
Platform-level enforcement, as Power Platform's managed environments demonstrate, is one piece of the answer. But it is one piece. The broader question of how you govern AI behaviour at the level of intent and decision, not just access and sharing, is one the industry is still working out. The tooling does not fully exist yet. The standards are nascent. The thinking, in most organisations, has not caught up with the deployment.
That is not an argument for slowing down. It is an argument for intellectual honesty about where the gaps are, so that the governance structures being built today are designed to evolve rather than calcify.
The Practical Path Forward
There is a temptation, when the governance conversation gets complicated, to reach for the most restrictive position as a default. Lock it down. Approve everything centrally. Build the committee. That is not governance. It is the appearance of governance with shadow IT as the inevitable outcome.
Real governance creates promotion paths. It lets teams experiment in constrained environments and provides a clear, supported route to move something into production when the risk profile justifies it. The alternative is that innovation happens anyway, just outside your line of sight.
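A promotion path can be expressed in code too. The environment tiers and gating logic below are hypothetical, but they capture the shape of it: the supported route is cheaper than the workaround, and an unapproved promotion is held rather than rejected outright.

```typescript
type Environment = "sandbox" | "pilot" | "production";

interface PromotionRequest {
  agentName: string;
  from: Environment;
  to: Environment;
  reviewApproved: boolean; // set by whatever review the target zone demands
}

// Promotion is a path, not a wall: the constrained tier is self-serve,
// and the gate to production holds rather than rejects, so the clear,
// supported route stays open.
function promote(req: PromotionRequest): string {
  if (req.from === "sandbox" && req.to === "pilot") {
    return `${req.agentName}: promoted to pilot with default guardrails`;
  }
  if (req.from === "pilot" && req.to === "production") {
    return req.reviewApproved
      ? `${req.agentName}: promoted to production`
      : `${req.agentName}: held pending review, path remains open`;
  }
  return `${req.agentName}: promotion must move one tier at a time`;
}
```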
For organisations thinking about where to start: classify your agents by risk, enforce those classifications through the platform rather than through process, and treat identity hygiene as a prerequisite rather than a parallel workstream. Accept that the governance model you build today will need to be rebuilt as the technology and the thinking mature. That is not a weakness in the plan. It is the plan.
Source: https://www.microsoft.com/en-us/power-platform/blog/2026/04/01/building-trustworthy-ai-a-practical-framework-for-adaptive-governance/