In my last post (https://is.gd/kCldYQ), I left you with a question. Pick one AI system your organisation has deployed — a real one, running today — and try to write down, precisely and in a form you'd hand to a regulator, what it's authorised to do.
If you found that uncomfortable, you're not alone. But here's the question that matters more: why is it uncomfortable? You have tools. You have documentation. You have architecture diagrams, a CMDB, probably a risk register of some kind. Why can't you just pull the answer out of those?
The honest answer is that none of those tools were built to hold this kind of information. Not because they're bad tools (most of them are very good at what they were designed to do) but because what they were designed to do is fundamentally different from what you need now.
Let me take them in turn.
The diagramming tool
Visio, Miro, draw.io, Lucidchart. Take your pick. These are the first place most organisations reach when they need to document an AI system, and they produce something that looks right: boxes, arrows, swimlanes, labels. You can see the system. You can present it in a meeting. It communicates the shape of what was built.
What it doesn't do is hold meaning. A box labelled "Document Processing Agent" is a shape with a label. It doesn't know what that agent is permitted to do. It doesn't know whether the agent's output drives action automatically or whether a human has to approve it first. It doesn't know what data the agent can access, what the authority boundary looks like, or what happens when something falls outside the expected parameters.
The diagram is a picture of the system. A picture is not a governance record. It's also almost certainly out of date, because diagrams drift from reality quickly and there's no mechanism to keep them honest.
The CMDB
Your Configuration Management Database knows what exists. It can tell you there's a service running in a particular environment, what version it is, what it depends on, and who owns it. That's genuinely useful information.
But a CMDB is a registry of assets and configurations, not a record of intent. It doesn't hold what a system is permitted to do. It doesn't model authority. It has no concept of assertion levels, or whether human oversight is required, or where the boundary sits between autonomous AI decision making and a process that requires human approval before anything happens. The CMDB knows the system exists. It doesn't know what the system is allowed to do with that existence.
The architecture repository
This one is closer. A mature architecture practice (one using something like ArchiMate, or a proper enterprise architecture tool) will capture components, relationships, and sometimes constraints. You can model a system at multiple levels of abstraction and show how it fits into the broader landscape.
But even here, the problem is that components are treated as nodes in a graph, not as typed objects with governance properties. An AI agent and a legacy batch process look the same at the architecture layer: they're both boxes connected by lines. There's no native concept of assertion level, no way to express what an AI component is authorised to do versus what it merely can do technically, no model of authority boundaries.
You can annotate your way around this, building bespoke conventions on top of the tooling. Some organisations do. But you're fighting the data model rather than working with it, and anything you build that way is fragile and not queryable in any meaningful sense.
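To make the fragility concrete, here's a minimal sketch of the bespoke-convention pattern: governance facts stored as free-text annotations on generic nodes, queried by string matching. Everything here (node names, annotation formats, the query) is illustrative, not taken from any real tool.

```python
# Hypothetical sketch: governance data as free-text annotations on generic
# diagram nodes -- the "annotate your way around it" pattern.
nodes = [
    {"label": "Document Processing Agent",
     "annotations": ["governance: autonomous", "data: customer-docs"]},
    {"label": "Claims Triage Agent",
     "annotations": ["Gov - human approval reqd"]},  # convention has drifted
]

def autonomous_components(nodes):
    """Find components annotated as autonomous -- by string matching,
    because the data model has no typed 'authority' property."""
    return [n["label"] for n in nodes
            if any("autonomous" in a.lower() for a in n["annotations"])]

print(autonomous_components(nodes))
```

The query only works as long as every author writes annotations the same way. The second node's drifted convention would silently escape any query written against the first node's format, which is exactly what "fragile and not queryable in any meaningful sense" looks like in practice.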
The risk register
The risk register knows what could go wrong. It captures risk descriptions, likelihood scores, impact ratings, mitigations, and owners. For a governed AI system, this is obviously relevant: the risk that an agent acts outside its intended authority is exactly the kind of thing a risk register exists to hold.
The problem is disconnection. The risk register captures the abstract risk, but it isn't linked to the system structure. It can't tell you which specific component creates the exposure, what authority boundary it crosses, or whether the mitigation (typically described as a paragraph of text) is actually implemented. You end up with a risk entry that says "AI system may act outside intended parameters" with a mitigation that says "architecture review completed" and no verifiable link between the two.
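By contrast, here's a sketch of what a *connected* risk entry could look like: the risk references a specific component and an authority boundary, and the mitigation carries a checkable status rather than a paragraph of prose. All identifiers and field names are hypothetical.

```python
# Hypothetical sketch: a risk entry structurally linked to the component
# that creates the exposure, with a verifiable mitigation status.
from dataclasses import dataclass

@dataclass
class Mitigation:
    description: str
    implemented: bool          # a checkable status, not a prose claim

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    component_id: str          # the specific component creating the exposure
    authority_boundary: str    # which boundary the risk crosses
    mitigation: Mitigation

risk = RiskEntry(
    risk_id="R-014",
    description="Agent acts outside intended authority",
    component_id="doc-processing-agent",
    authority_boundary="autonomous-action / human-approval",
    mitigation=Mitigation("Output gated behind human approval step",
                          implemented=False),
)

# Because the link is structural, the auditor's questions have answers:
# which component, which boundary, and is the mitigation actually in place?
print(risk.component_id, risk.mitigation.implemented)
```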
The gap
Each of these tools captures something real and useful. The problem isn't any one of them. The problem is that the information you actually need (what each AI component is, what it's permitted to do, where the authority boundaries sit, what human oversight looks like and under what conditions) doesn't live cleanly in any of them.
It lives in the gaps. In the architect's head. In an email thread from the original design phase. In an incomplete risk assessment done under time pressure before the launch date. In a Confluence page that was accurate once and hasn't been touched since.
That's the documentation debt. And it's not just an inconvenience. It's a liability, because the questions coming from outside the organisation (from regulators and auditors and increasingly from customers) require answers that none of your existing tools can produce in a form that actually holds up.
What would actually work
What's needed isn't another layer of documentation on top of the existing stack. It's a different kind of artefact, one built around the specific properties that matter for AI governance. Typed components, not shapes. Explicit authority models, not inferred from context. Queryable records, not static diagrams. Something where governance isn't a separate step you do after the design, but a by-product of the design process itself.
That's not a small ask. But it's a precise one. And precision is useful, because it tells you exactly why patching your existing tools together won't get you there.
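To make "typed components, not shapes" slightly more tangible, here's a minimal sketch of what such a record might look like: each AI component carries explicit governance properties, so the regulator's question becomes a query rather than an archaeology exercise. Every name, field, and value below is an illustrative assumption, not a proposed schema.

```python
# Hypothetical sketch: AI components as typed objects with explicit
# governance properties, queryable rather than inferred from a diagram.
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    AUTONOMOUS = "autonomous"            # acts without human approval
    HUMAN_APPROVAL = "human_approval"    # a human must approve each action

@dataclass
class AIComponent:
    name: str
    permitted_actions: list[str]         # what it is authorised to do
    oversight: Oversight
    data_scope: list[str]                # data it may access

components = [
    AIComponent("doc-processing-agent", ["classify", "route"],
                Oversight.AUTONOMOUS, ["customer-docs"]),
    AIComponent("refund-agent", ["propose-refund"],
                Oversight.HUMAN_APPROVAL, ["orders", "payments"]),
]

def acting_without_oversight(components):
    """The regulator's question, expressed as a query: which components
    can act with no human in the loop?"""
    return [c.name for c in components
            if c.oversight is Oversight.AUTONOMOUS]

print(acting_without_oversight(components))
```

Nothing about the sketch is clever; the point is that the authority model is a first-class property of the record, not an annotation fighting the data model.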
In my next post I want to talk about what that artefact actually looks like: what the properties are, how the model works, and why the diagram is still the right interface even when the output is something much more structured.