During the most recent podcast recording I made an offhand remark about feeling surrounded by agents. It was not a planned insight. It was just a moment of noticing how routine the whole thing has become. Agents now appear everywhere without invitation. They sit inside Microsoft 365. They assist in GitHub. They pop up in Slack and Salesforce. They weave themselves into cloud consoles and security dashboards. They do not announce themselves. They quietly become part of how work happens.
That is the part we should pay attention to. Most organisations have not adopted agents with intentional design. They have drifted into using them. A Copilot gets enabled because someone is curious. An automation tool arrives through a SaaS update. A developer tries a new assistant because it speeds up a task. None of these decisions are wrong. The issue is that nobody steps back to look at the combined effect. The result is an ecosystem full of automated actors that now influence the way systems behave without any real architectural thought behind them.
The ease of building with agents reinforces this drift. You can take documentation, feed it to an assistant and receive a functioning proof of concept within an afternoon. The speed is impressive and the appeal is obvious. The danger is that you can now produce something usable before you understand its moving parts. That loss of friction means we skip the learning that once forced us to understand the shape of a problem before automating it.
This is where agents create the most risk. They do not understand intent. They do not understand boundaries. They do not understand organisational nuance. They follow instructions literally. If your identity environment is inconsistent, they inherit that inconsistency. If your data is fragmented, they reflect the fragmentation exactly as it exists. If your governance relies on human awareness, they walk straight past it, because they have no intuition to fall back on.
This is why agents reveal more than they fix. They surface whatever foundation they are placed upon. A coherent organisation becomes more efficient. A disjointed one becomes uncomfortably visible. Agents accelerate conditions that already existed. They do not create new ones.
There are moments when it makes sense to be explicit and use a little structure. One of those moments is distinguishing between the type of autonomy we talk about and the type we can actually trust in practice. Autonomy sounds exciting until you realise it is simply execution without constraints. What we need is something more controlled.
Autonomy needs boundaries, and those boundaries are not optional.
For example:
- Agents must be treated as proper identities: They require least privilege, clear scopes and regular review, just as any human account does.
- They need well-defined limits: An agent with no boundaries will wander into areas you never intended, simply because your prompt was interpreted literally.
- Their activity must be visible and traceable: If you cannot see what an agent did and when it did it, you cannot trust the system it operates inside.
- Human involvement still matters for high-impact actions: The existence of automation does not remove accountability. It changes the shape of it.
These points are not administrative chores. They are the basic requirements for running a safe environment where automation does not outrun comprehension.
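To make that concrete, here is a minimal sketch in Python. None of it is a real framework and every name is hypothetical; the point is only to show what those four requirements look like when they are written as code rather than left as intentions.

```python
# A minimal sketch, not a real framework: all names here are hypothetical.
# It expresses the four requirements above as code: a scoped identity,
# hard boundaries, an audit trail, and human sign-off for high-impact actions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    name: str
    scopes: frozenset        # least privilege: only what was explicitly granted
    review_due: datetime     # scheduled review, like any human account

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent, action, allowed):
        # Visible and traceable: when, who, what, and whether it was allowed.
        self.entries.append((datetime.now(timezone.utc), agent.name, action, allowed))

HIGH_IMPACT = {"delete_records", "change_permissions"}  # hypothetical examples

def execute(agent: AgentIdentity, action: str, log: AuditLog,
            approved_by: str | None = None):
    if action not in agent.scopes:
        log.record(agent, action, allowed=False)
        raise PermissionError(f"{agent.name} is not scoped for {action}")
    if action in HIGH_IMPACT and approved_by is None:
        log.record(agent, action, allowed=False)
        raise PermissionError(f"{action} needs a named human approver")
    log.record(agent, action, allowed=True)
    # ... perform the action here ...

# Usage: the agent can read reports freely, but deletion needs a human name.
agent = AgentIdentity("report-bot", frozenset({"read_reports", "delete_records"}),
                      review_due=datetime(2026, 1, 1, tzinfo=timezone.utc))
log = AuditLog()
execute(agent, "read_reports", log)                       # allowed
execute(agent, "delete_records", log, approved_by="ana")  # allowed, and accountable
```

The detail worth noticing is that even refusals are logged. A denied action is still evidence of what an agent tried to do, and that record is part of what makes the system trustworthy.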
Data continues to be the other major source of difficulty. The first mile of data work is still the most problematic. This is the place where inconsistent schemas, partial lineage and informal definitions live. Agents will not correct any of this. They will simply act on the information they receive. When the data is reliable the agent behaves predictably. When the data carries unspoken assumptions the agent repeats those assumptions with great confidence. The result is automation that looks intelligent but carries forward flaws that nobody has addressed.
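One way to keep that first mile honest is to put a plain validation gate between the data and the agent. The sketch below is deliberately small and entirely hypothetical: the field names and types are invented, and a real pipeline would check far more. It only illustrates the principle that malformed data should be rejected before an agent can act on it.

```python
# A minimal sketch (all field names hypothetical): validate the "first mile"
# before any agent sees the data, instead of trusting the agent to notice.

REQUIRED = {"customer_id": str, "region": str, "revenue": float}

def validate_record(record: dict) -> list[str]:
    """Return the unspoken assumptions this record would smuggle past an agent."""
    problems = []
    for field_name, expected in REQUIRED.items():
        if field_name not in record:
            problems.append(f"missing field: {field_name}")
        elif not isinstance(record[field_name], expected):
            problems.append(f"{field_name} is {type(record[field_name]).__name__}, "
                            f"expected {expected.__name__}")
    return problems

# Usage: revenue arrives as a formatted string, a classic informal convention.
record = {"customer_id": "C-1001", "region": "EMEA", "revenue": "12,400"}
issues = validate_record(record)
if issues:
    # Stop here: an agent would happily "analyse" the malformed revenue figure.
    print("Rejected before reaching the agent:", issues)
```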
Designing for an agent-enabled organisation requires a shift toward clarity. You cannot add safety later. You must decide which types of agents you will allow. You must intentionally manage their identity lifecycle. You must set boundaries that limit where they can act. You must be able to observe their decisions. And you must encourage a culture that writes down the rules rather than assuming that everyone knows them.
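Writing the rules down can be as simple as a declarative policy that lives in version control. The sketch below is hypothetical from top to bottom: the keys, agent types and intervals are invented, and your organisation's version will look different. What matters is that the rule exists in one reviewable place instead of in someone's head.

```python
# A minimal sketch of "writing the rules down": a declarative policy
# (every key and value here is hypothetical) that replaces the rules
# people otherwise carry around in their heads.

AGENT_POLICY = {
    "allowed_agent_types": ["coding_assistant", "report_summariser"],
    "identity_lifecycle": {
        "owner_required": True,           # every agent has a named human owner
        "review_interval_days": 90,       # scopes are re-approved on a schedule
        "expire_on_owner_departure": True,
    },
    "boundaries": {
        "coding_assistant": {"repos": ["internal/*"], "prod_access": False},
        "report_summariser": {"datasets": ["finance.reports"], "write": False},
    },
    "observability": {"log_every_action": True, "retention_days": 365},
}

def is_permitted(agent_type: str, policy: dict = AGENT_POLICY) -> bool:
    # The point is not the code: it is that the decision is now explicit
    # and reviewable, not implicit and remembered.
    return agent_type in policy["allowed_agent_types"]

print(is_permitted("coding_assistant"))    # True: decided and written down
print(is_permitted("browser_automation"))  # False: not yet decided, so not allowed
```

The default in that last line is the useful part: an agent type nobody has decided on is denied, rather than quietly permitted.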
Agents make everything visible. They highlight every shortcut and every unwritten rule that works only because a human understands the context. They surface all the invisible glue that holds processes together. This is why they feel both powerful and unsettling. They do not break your organisation. They show you exactly where the cracks already were.
So the conclusion remains the same. Agents are not your strategy. They are not your differentiator. They are your newest and most demanding dependency. They depend on the quality of your architecture, the clarity of your identity model, the discipline of your data management and your willingness to state the rules that were once left unspoken.
🎙️ Want the conversational version?
If you’d like to hear how this theme first emerged — including some rants, a few laughs, and a couple of cautionary tales — check out the latest episode of Cloudy with a Chance of Insights: Spotify | YouTube | Apple Podcasts