A few days ago, and yes, I am technically still on holiday, I was standing at Foz do Iguaçu, watching an unreasonable amount of water throw itself over a cliff. It is loud, chaotic, and completely indifferent to whether you feel prepared for it. The mist gets everywhere, the ground is slippery, and the closer you get, the harder it is to pretend you are in control of anything.
Naturally, my first thought was AI governance. Which probably says more about me than it should.
The falls themselves are formed by a single river, the Iguaçu, dropping suddenly and spectacularly over the edge of a plateau. But what makes the place even more interesting is what happens nearby. Just downstream, the Iguaçu River meets the Paraná River, right at the point where Brazil, Argentina, and Paraguay come together. Separate flows, different histories, converging and then accelerating together.
Standing there, it is hard not to think about what happens when independent systems collide. Which, increasingly, is exactly what we are dealing with when it comes to AI.
AI Risk Appears Where Things Converge
Most AI governance models assume a neat point of entry. A platform. A programme. A centre of excellence. Somewhere you can point to and say, this is where AI lives.
In reality, AI arrives from multiple directions at once.
One flow comes from productivity tools and copilots, embedded into everyday work. Another comes from engineering teams building models, pipelines, and automations. A third arrives quietly inside third-party services that already sit deep in the organisation. Individually, each of these feels manageable.
The risk appears where they meet.
At the confluence of the rivers, the water behaves differently. Pressure builds, visibility drops, and movement becomes harder to predict. AI risk behaves in exactly the same way. It emerges at the intersection of data, identity, automation, and decision making, often in places nobody explicitly designed.
This is where many AI governance efforts struggle. They focus on approving use cases or selecting models, while the real risk forms in the gaps between systems, teams, and responsibilities.
Policy Is a Signpost, Not a Barrier
Near the edge of the falls there are signs politely suggesting you do not step too close. They do not physically stop you. They assume you are paying attention and that you understand the consequences.
Most AI policies work the same way.
Ethical principles, acceptable use statements, and governance frameworks are necessary, but they are not controls. They are signposts. Useful, visible, and very easy to ignore when deadlines loom or pressure builds.
Once AI becomes genuinely useful, people will use it. Not recklessly, but pragmatically. They will paste data into prompts, automate decisions, and trust outputs because it saves time. Telling people not to do this, without giving them safer ways to do it, is optimistic at best.
Effective AI governance starts from a more realistic assumption. It assumes people will step closer to the edge. It designs for that behaviour rather than pretending policy alone will prevent it.
That means identity-based access, data controls that understand AI interactions, auditability, and monitoring that focuses on behaviour rather than intent. Governance that exists only on paper is already behind the flow.
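To make that slightly more concrete, here is a minimal sketch of what behaviour-focused monitoring could look like in practice: a hypothetical gateway that sits between a person and an AI service, ties every prompt to an identity, checks what is actually in the prompt against some simple data rules, and writes an audit record either way. Every name and pattern here is illustrative, not a reference implementation or anyone's real product.

```python
# Hypothetical sketch only: an identity-aware, audit-first AI gateway.
# Names (AIGateway, SENSITIVE_PATTERNS, ...) are illustrative, not a real API.

import json
import re
import time
from dataclasses import dataclass, field

# Crude illustrative patterns for data that probably should not leave the organisation.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}


@dataclass
class AuditRecord:
    user: str
    model: str
    timestamp: float
    flags: list = field(default_factory=list)
    allowed: bool = True


class AIGateway:
    """Sits between users and an AI service: identity in, audit record out."""

    def __init__(self, model_client):
        self.model_client = model_client  # any callable: prompt -> response
        self.audit_log = []               # in reality, ship this somewhere central

    def handle(self, user: str, prompt: str, model: str = "general-purpose") -> str:
        record = AuditRecord(user=user, model=model, timestamp=time.time())

        # Behaviour-based check: what is actually in the prompt, not what policy intended.
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                record.flags.append(label)

        # Log regardless of outcome; visibility first, blocking second.
        if record.flags:
            record.allowed = False
            self.audit_log.append(record)
            return f"Blocked: prompt appears to contain {', '.join(record.flags)}."

        self.audit_log.append(record)
        return self.model_client(prompt)


if __name__ == "__main__":
    gateway = AIGateway(model_client=lambda p: f"(model response to: {p!r})")
    print(gateway.handle("alice", "Summarise this meeting for me"))
    print(gateway.handle("bob", "Draft a reply to customer jane.doe@example.com"))
    print(json.dumps([vars(r) for r in gateway.audit_log], indent=2, default=str))
```

The point of the sketch is not the pattern matching, which is deliberately crude. It is that the audit record exists whether or not the request is allowed, and that it is tied to a person rather than to a policy document.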
Risk Lives in the Mist
If there is one constant at Iguaçu, it is the mist. It spreads far beyond the obvious danger zone, reduces visibility, and makes everything harder to judge. You can be standing well back and still end up soaked.
AI risk behaves the same way.
It is not confined to high-profile use cases like customer-facing chatbots or automated decision engines. It lives in prompts, training data, fine-tuning, shadow usage, integrations, and outputs that quietly get reused elsewhere. Data exposure can happen without a breach. Bias can be introduced without malicious intent. Accountability becomes blurred very quickly when humans and models are making decisions together.
This is why traditional risk models struggle. They like clean boundaries. AI does not respect them.
Good AI governance is less about preventing every mistake and more about detecting problems early. You cannot eliminate the mist. You can only improve your visibility within it.
Control Slows You Down, Flow Exposes You
Watching the water, it becomes obvious that control and safety are not the same thing. You can slow the flow in places, but you cannot stop it without creating pressure somewhere else.
This tension sits at the heart of AI governance.
Lock things down too tightly and teams will route around you. Move too slowly and AI adoption will outpace your ability to understand the risk. Centralise everything and you become a bottleneck. Decentralise everything and you lose sight of what is actually happening.
The organisations handling this best are not searching for a perfect balance. They are constantly adjusting. They treat AI governance as a living system, not a fixed framework. They invest in telemetry, feedback loops, and rapid response rather than assuming they can design the final state up front.
They accept that some risk will materialise and focus on resilience rather than avoidance.
A Slightly Uncomfortable Thought, From Someone Still on Holiday
Standing there, damp, slightly deafened, and mildly concerned about my phone, it occurred to me that the water does not care how well thought out your plan is. It will keep flowing regardless.
AI adoption is the same. It is already happening, from multiple directions, and it is not waiting for governance models to catch up. The real question is not whether your organisation has an AI policy, but whether it understands where the flows converge, and how much visibility it has when things start to accelerate.
AI governance is not about stopping the river. It is about recognising that you are standing in the mist, whether you like it or not, and deciding how prepared you are when the flow changes.
Which, frankly, is not a bad thought to have while you are supposed to be relaxing.