Introduction, and the source of the uncomfortable truth
This week, I have been revisiting the IBM Institute for Business Value research, Secure by design, smarter with AI. One finding should make every leadership team pause: 42 percent of executives rank their own operational shortcomings among their top cybersecurity threats, ahead of nation states and just behind cybercriminals. When executives say, in effect, “our own house is the bigger risk,” that is not theatre, it is a reality check. The same report shows only 40 percent have embedded governance, risk, and compliance directly into their workflows, despite broad agreement that secure by design is critical. It is a gap between aspiration and execution, and AI is widening it.
If we are honest about the last few years, we have spent far more time on tool selection, pilots, and proofs of concept than on the dull but decisive work of governance that keeps those tools safe, accountable, and auditable. AI is now inside everyday decisions, and inside the connective tissue of our ecosystems. That means any weakness in how we operate will show up quickly, and publicly.
The business problem, stated simply
Most organisations can articulate their AI ambition. Far fewer can articulate the operating model that keeps AI safe, measurable, and responsive when the unexpected arrives. The problem is not intent, it is cadence. Governance that moves quarterly will not keep pace with systems that update themselves daily. And when teams work in silos, architecture and operations lose the continuous dialogue they need to adapt in real time.
The common symptoms are familiar: policy frameworks document principles but do not enforce them, telemetry exists but is not correlated across environments, and accountability lives in slide decks rather than in live metrics and decision rights. At the edges, the ecosystem is even more fragile. A majority of organisations struggle to collaborate across the operations lifecycle, and only 25 percent actively monitor third party AI model inputs and outputs. If the models that shape your decisions are treated as black boxes, then so is your risk.
Lessons that actually change outcomes
Here are five lessons that move the conversation from aspiration to execution. They are drawn from IBV findings, from work with enterprise teams, and from the uncomfortable truths that surface when things go wrong.
Make governance a design property, not a gate
Secure by design, in practice, means security and privacy are embedded through the entire lifecycle, from architecture to operations. It reads like common sense, yet the execution gap is real. The lesson is to turn controls into code. When policy is declarative and machine enforceable, the audit trail writes itself, and exceptions become visible as they happen.
What this looks like in the real world
- Policies expressed as machine readable rules, linked to live telemetry.
- Automated anomaly detection that triggers corrective workflows without manual escalation.
- Change records and risk decisions stored as operational artefacts, not meetings.
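To make the idea concrete, here is a minimal sketch of a declarative rule evaluated against a telemetry snapshot, with the audit record written as a side effect of the check. The rule identifier, metric, and threshold are illustrative assumptions, not taken from the IBV research or any particular policy engine.

```python
# Minimal sketch of "controls as code": a declarative, machine enforceable rule
# evaluated against live telemetry, with the audit trail produced automatically.
# The rule ID, metric, and threshold below are hypothetical examples.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PolicyRule:
    rule_id: str
    description: str
    metric: str        # telemetry field the rule inspects
    max_value: float   # machine enforceable threshold

def evaluate(rule: PolicyRule, telemetry: dict, audit_log: list) -> bool:
    """Check one rule against a telemetry snapshot and append the evidence record."""
    observed = float(telemetry.get(rule.metric, float("inf")))
    compliant = observed <= rule.max_value
    audit_log.append({
        "rule_id": rule.rule_id,
        "metric": rule.metric,
        "observed": observed,
        "compliant": compliant,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    })
    return compliant

# Example: cap how long an unreviewed model change may stay live.
audit_log: list = []
rule = PolicyRule("SEC-014", "Unreviewed model changes closed within 24 hours",
                  metric="unreviewed_change_age_hours", max_value=24)
telemetry = {"unreviewed_change_age_hours": 31}  # would arrive from monitoring
if not evaluate(rule, telemetry, audit_log):
    print(f"Exception visible for {rule.rule_id}: {rule.description}")
print(f"{len(audit_log)} audit record(s) written without a manual step")
```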
Synchronise architecture and operations
Most breaches exploit not just a vulnerability, but a coordination failure. IBV highlights the fracture between architecture and operations, and shows the efficiency gains when the two are kept in continuous dialogue. The lesson is to build one adaptive system, not two independent ones.
Practical moves
- Shared runtime telemetry for design and operations teams.
- A single playbook for incident feedback into design improvements, measured in days, not quarters.
- Standardisation in coding and testing, so controls scale rather than fragment.
Treat the ecosystem as a security surface
Real resilience now depends on suppliers, managed services, and external AI platforms. Protection cannot stop at your boundary. IBV calls the next phase federated trust: shared identity frameworks, shared telemetry, and adaptive validation across the value chain.
Where most programmes stumble
- No visibility into third party model inputs, outputs, or update practices.
- Contracts that define responsibilities, but not the signals that prove them.
Corrective steps
- Require model input and output monitoring from suppliers, with bias and integrity checks published to a shared dashboard (a monitoring sketch follows this list).
- Adopt shared identity for human and non human actors, with conditional access that is enforced, not asserted.
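As an illustration of what monitoring supplier model outputs could look like, here is a hedged sketch that compares a supplier model's recent outputs with an agreed baseline and publishes the result as a dashboard entry. The supplier name, the shift metric, and the threshold are assumptions for the example, not a prescribed standard.

```python
# Hedged sketch: compare a supplier model's recent outputs against an agreed
# baseline and publish an integrity signal to a shared dashboard feed.
# Supplier name, metric, and threshold are illustrative assumptions.
import json
import statistics

def output_shift(baseline: list[float], recent: list[float]) -> float:
    """Crude integrity signal: shift in mean output, in baseline standard deviations."""
    spread = statistics.pstdev(baseline) or 1.0
    return abs(statistics.mean(recent) - statistics.mean(baseline)) / spread

def dashboard_entry(supplier: str, baseline: list[float], recent: list[float],
                    threshold: float = 2.0) -> str:
    """One JSON record the supplier and the buyer can both see."""
    shift = output_shift(baseline, recent)
    return json.dumps({
        "supplier": supplier,
        "output_shift_sigma": round(shift, 2),
        "within_agreed_bounds": shift <= threshold,
    })

# Example: risk scores sampled before and after a supplier model update.
baseline_scores = [0.62, 0.58, 0.65, 0.60, 0.63]
recent_scores = [0.81, 0.79, 0.84, 0.80, 0.78]
print(dashboard_entry("example-model-vendor", baseline_scores, recent_scores))
```

In practice you would track more than a mean shift, but even this much makes a supplier update visible on a shared view rather than inside a black box.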
Make executive sponsorship measurable, not ceremonial
The research is clear: C level sponsorship and consistent funding are critical success factors for secure by design. The lesson is to tie sponsorship to accountable outcomes: a lower incident count is useful, but faster incident response, higher efficiency across IT and security, and stronger customer trust are what move the needle.
Translate sponsorship into numbers
- A quarterly resilience score that blends time to detect, time to contain, and time to recover (a worked sketch follows this list).
- An efficiency index for architecture and security workflows, benchmarked against the 11–25 percent gains reported for optimised secure by design programmes.
- A trust index that tracks supply chain resilience and brand perception.
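For illustration only, here is one way a quarterly resilience score could be blended from those three times. The targets and weights are assumptions a board would need to agree; they are not figures from the IBV research.

```python
# Illustrative blend of detection, containment, and recovery times into a single
# quarterly resilience score. Targets (hours) and weights are assumed values.
def resilience_score(detect_h: float, contain_h: float, recover_h: float,
                     targets=(4.0, 12.0, 48.0), weights=(0.4, 0.3, 0.3)) -> float:
    """Score each measure as target/actual (capped at 1.0), then weight and sum to 100."""
    actuals = (detect_h, contain_h, recover_h)
    parts = (min(t / a, 1.0) * w for t, a, w in zip(targets, actuals, weights))
    return round(100 * sum(parts), 1)

# Example quarter: detected in 6h, contained in 10h, recovered in 60h.
print(resilience_score(6, 10, 60))  # 80.7, against targets of 4h, 12h, and 48h
```

Capping each part at 1.0 is a deliberate choice: beating one target cannot mask a miss on another.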
Close the three gaps that slow programmes down
IBV identifies practitioner friction, a perception versus performance gap, and an efficacy gap where capabilities outpace outcomes. The lesson here is straightforward: invest in usability, training, and operational discipline. Controls that are hard to use will not be used. Teams that believe they are efficient, but are not, will not change.
Runbooks that help, not hinder
- Reduce tool sprawl and integrate controls into existing developer and operator workflows.
- Publish reality checks that compare perceived efficiency to actual outcomes.
- Treat training and reskilling as part of the control budget, not optional extras.
A practical operating model leaders can adopt
If you need a way to bring these lessons into the next quarter, here is a simple model that aligns governance, engineering, and operations.
Outcomes and measures
Define resilience as an operational outcome. Agree three measures that matter to your board and your clients:
- Responsiveness: time to detect, contain, and recover.
- Integrity: evidence that data, models, and decisions meet your standards, including supplier models.
- Trust: externally published metrics and client proof points.
Controls as code
Adopt a policy engine that translates governance into programmatic controls.
- Policies expressed as rules with versioning, approvals, and rollback.
- Automated enforcement and exception handling.
- Continuous evidence collection that supports audits without special projects (a short sketch follows this list).
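A minimal sketch of the record keeping side, assuming a simple in-memory store: each policy change is a version with an author and an approver, rollback is re-approving an earlier version, and the version history doubles as audit evidence. Field names and the approval flow are illustrative, not a specific product's API.

```python
# Sketch of versioned, approvable policy records where the audit evidence is a
# by-product of normal change activity. The store and its fields are hypothetical.
from datetime import datetime, timezone

class PolicyStore:
    def __init__(self):
        self._versions: list[dict] = []

    def propose(self, rule_text: str, author: str) -> int:
        """Record a new policy version; it is not active until approved."""
        self._versions.append({
            "version": len(self._versions) + 1,
            "rule": rule_text,
            "author": author,
            "approved_by": None,
            "approved_at": None,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
        return self._versions[-1]["version"]

    def approve(self, version: int, approver: str) -> None:
        entry = self._versions[version - 1]
        entry["approved_by"] = approver
        entry["approved_at"] = datetime.now(timezone.utc).isoformat()

    def active(self) -> dict | None:
        """Most recently approved version; rollback is re-approving an earlier one."""
        approved = [v for v in self._versions if v["approved_by"]]
        return max(approved, key=lambda v: v["approved_at"]) if approved else None

    def evidence(self) -> list[dict]:
        """The full version history is the audit trail, no special project required."""
        return list(self._versions)

store = PolicyStore()
v1 = store.propose("Third party model outputs must be monitored", author="risk-team")
store.approve(v1, approver="ciso")
print(store.active()["rule"], "| evidence records:", len(store.evidence()))
```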
Telemetry, correlation, and feedback
Build a single stream of telemetry across architecture and operations.
- Real time signal correlation, with anomalies raising visible exceptions.
- Incident feedback loops that adjust designs, not just procedures.
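A small sketch of the correlation step, assuming two event feeds keyed by component: a recent design change plus a runtime anomaly on the same component inside a 24 hour window becomes one visible exception rather than two disconnected alerts. Event shapes and the window are assumptions for the example.

```python
# Sketch of correlating design-time and runtime telemetry in one stream.
# Component names, event types, and the 24 hour window are illustrative.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)

design_events = [
    {"component": "scoring-api", "event": "model_config_change",
     "at": datetime(2025, 11, 3, 9, 0)},
]
runtime_events = [
    {"component": "scoring-api", "event": "error_rate_spike",
     "at": datetime(2025, 11, 3, 17, 30)},
    {"component": "billing", "event": "latency_spike",
     "at": datetime(2025, 11, 3, 18, 0)},
]

def correlate(design, runtime, window=WINDOW):
    """Pair each runtime anomaly with design changes on the same component within the window."""
    return [(d, r) for r in runtime for d in design
            if r["component"] == d["component"] and abs(r["at"] - d["at"]) <= window]

for change, anomaly in correlate(design_events, runtime_events):
    print(f"Exception: {anomaly['event']} on {anomaly['component']} "
          f"within 24h of {change['event']}")
```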
Federated trust across the ecosystem
Extend identity, telemetry, and validation across partners and providers.
- Shared identity for agents and services, with lifecycle governance (a conditional access sketch follows this list).
- Monitoring of third party model inputs and outputs, with results published to a shared dashboard.
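For the identity bullet, a hedged sketch of what enforced, rather than asserted, conditional access could look like for a non human actor: the call is allowed only if the agent is registered, unexpired, and scoped for the action. The registry shape, agent name, and scope are illustrative assumptions.

```python
# Sketch of conditional access for non human actors: deny unknown, expired, or
# out of scope agents before any model call. Registry and scopes are hypothetical.
from datetime import datetime, timezone

identity_registry = {
    "agent://claims-summariser": {
        "scopes": {"invoke:summary-model"},
        "expires": datetime(2026, 3, 31, tzinfo=timezone.utc),  # lifecycle governance
    },
}

def allowed(agent_id: str, scope: str, registry=identity_registry) -> bool:
    """Enforced check, not an asserted one."""
    entry = registry.get(agent_id)
    if entry is None:
        return False
    return scope in entry["scopes"] and datetime.now(timezone.utc) < entry["expires"]

print(allowed("agent://claims-summariser", "invoke:summary-model"))  # True until expiry
print(allowed("agent://unknown-bot", "invoke:summary-model"))        # False
```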
Culture, roles, and skills
Assign accountable owners for governance, design, operations, and supplier management.
- Define decision rights for risk acceptance and design change.
- Invest in training to close the efficacy gap.
The uncomfortable but necessary decisions
Leaders will need to make a few choices that are more cultural than technical.
- Choose transparency over optics. Publishing resilience and trust metrics will expose weaknesses. It will also create the pressure that improves them.
- Retire controls that do not scale. Manual reviews and periodic governance do not survive AI speed. Automate or replace.
- Hold suppliers to the same standards. If your internal teams monitor model behaviour and bias, your partners should too.
None of this is glamorous. All of it is the work that changes outcomes.
Bringing it back to the statistic that started the week
If 42 percent of executives see their own operational shortcomings as a top threat, then the response is not another promise, it is a new operating model. The aim is not perfection, it is continuous alignment: architecture and operations in dialogue, governance at design time, and trust engineered across the ecosystem. When you do that well, resilience stops being a promise and becomes a property.
Closing thought
We can keep blaming external actors, or we can fix what is under our own roof. My vote is to do the latter, with enough transparency to keep us honest, and enough automation to keep us fast.
Sources and citations
IBM Institute for Business Value, Secure by design, smarter with AI (November 2025).
Secure by design with AI for cyber resilience | IBM
Want more cloud insights?
Listen to Cloudy with a Chance of Insights podcast: Spotify | YouTube | Apple Podcasts