I do not tend to write follow‑up articles this quickly. Usually, if something feels incomplete, it is better to let the thinking settle rather than rush back to the keyboard. In this case though, I realised I had missed a genuinely important point in the previous piece, particularly around how organisations, and service providers too, are positioning generative AI in the development space and the familiar argument that inevitably follows around low code versus pro code.
Almost every generative AI conversation eventually collapses into the same debate. Citizen developers versus professional engineers. Speed versus control. Low code versus pro code. It is an old argument, but AI has given it new momentum, and in some cases the impression that the underlying trade‑offs have changed more than they actually have.
What I did not spend enough time on previously was what this choice really means operationally. Not in theory, and not in vendor decks, but once these systems are live, embedded in real processes, and quietly relied upon. That gap is worth addressing properly.
Because when you strip away the hype, the low code versus pro code discussion is not primarily a tooling decision. It is an operating model decision. One that determines where behaviour lives, how risk accumulates, and what an organisation must be capable of sustaining once systems move from experimentation into day‑to‑day operations.
This distinction mattered less when systems were largely deterministic and change arrived in predictable release cycles. It matters far more now, as automation and AI are embedded directly into operational workflows. Once systems interpret intent, adapt to context, and act autonomously, delivery success is no longer the challenge. Running those systems safely, consistently, and explainably is.
This is the backdrop against which roles like Forward Deployed Engineers (FDEs) reappear. Not as a fashionable construct, but as a signal that the operating model has not yet caught up with how systems behave in reality.
Where behaviour lives, and why that matters operationally
At the heart of the low code versus pro code choice is a deceptively simple question: where does behaviour live once the system is in production?
In low code environments, behaviour is expressed declaratively. Logic is spread across configuration, orchestration, connectors, permissions, and platform defaults. This brings solutions closer to business intent and dramatically shortens feedback loops. It also means that understanding why something happened often requires reconstructing intent across multiple layers that were never designed to be read like source code.
In pro code environments, behaviour is concentrated into explicit artefacts. Code paths, tests, and pipelines make logic inspectable and traceable. This gives confidence and control, but it also increases the distance between business intent and system behaviour unless feedback loops are deliberately engineered.
Neither approach is inherently right or wrong. They simply place operational burden in different places. The mistake organisations make is assuming that burden disappears rather than moves.
Operating low code at scale: what changes in practice
Low code works exceptionally well when solutions stay close to the business, the blast radius is limited, and the people who built them remain involved. The problems start when those conditions change, often quietly.
To operate low code sustainably, governance has to arrive early and act as an enabler rather than a brake. This is where a Centre of Excellence becomes essential. Not as an approval board, but as a practical capability that publishes usable patterns, curates approved connectors, defines clear boundaries for what should not be built in low code, and intervenes when experimentation drifts into critical dependency. A CoE that only produces guidance will be ignored. One that removes friction for compliant behaviour will be used.
Operational safety in low code environments comes less from traditional build pipelines and more from strong boundaries. Clear separation between environments, explicit identity and connector policies, and data loss prevention designed in from the start all replace what CI discipline provides in pro code estates. If these controls are weak, risk accumulates invisibly.
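To make the idea of "strong boundaries replacing CI discipline" concrete, here is a minimal sketch of a connector allow‑list check of the kind a CoE might publish as policy. The connector names, environment tiers, and the function itself are illustrative assumptions, not taken from any real platform's API.

```python
# Hypothetical CoE policy: which connectors each environment may use.
# Names and tiers are invented for illustration.
APPROVED_CONNECTORS = {
    "prod": {"sharepoint", "dataverse", "approved-llm-gateway"},
    "dev": {"sharepoint", "dataverse", "approved-llm-gateway", "http-generic"},
}


def violations(environment: str, connectors: set[str]) -> set[str]:
    """Return the connectors a solution uses that its environment does not allow."""
    allowed = APPROVED_CONNECTORS.get(environment, set())
    return connectors - allowed
```

The point of encoding this, rather than writing it in a guidance document, is that a solution calling an arbitrary HTTP endpoint can pass in dev and be blocked automatically on promotion to prod, with no committee involved.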
Promotion into production also becomes a critical moment. In low code, the line between prototype and operational system is thin. Organisations need explicit rules for when review is mandatory, who can approve promotion, and what audit trail must exist. Without this, business‑critical automations appear without operational readiness.
Finally, low code concentrates understanding in people rather than artefacts. That is efficient, but fragile. Sustainable low code estates plan for churn by documenting intent, not just configuration, and by periodically reviewing automations that have quietly become critical.
When these elements are missing, organisations compensate socially. Experienced individuals step in to explain behaviour, stabilise systems, and bridge gaps between business teams and operations. This is one of the most common paths by which FDE‑like roles emerge.
Operating pro code in an AI‑heavy world
Pro code offers explicit control, but it does not eliminate operational risk. It simply changes its shape.
Running pro code at scale requires mature platform engineering. Standardised CI/CD, policy as code, identity embedded into pipelines, and clear service ownership are not accelerators; they are prerequisites. Without them, pro code becomes slow and brittle rather than safe.
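"Policy as code" can sound abstract, so here is a deliberately small sketch of the idea in plain Python. Real estates typically use dedicated tools such as Open Policy Agent; the manifest fields and rules below are assumptions chosen for illustration.

```python
def check_deployment(manifest: dict) -> list[str]:
    """Evaluate a deployment manifest against policy; return policy failures.

    The manifest shape (runs_as, owner_team, environment, approved) is a
    hypothetical example, not a real platform schema.
    """
    failures = []
    if manifest.get("runs_as") == "root":
        failures.append("containers must not run as root")
    if not manifest.get("owner_team"):
        failures.append("every service needs a named owning team")
    if "prod" in manifest.get("environment", "") and not manifest.get("approved"):
        failures.append("production deploys require approval")
    return failures
```

Because the policy is executable, it runs in the pipeline on every deploy, which is what turns governance from a review meeting into a prerequisite.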
Equally important, feedback loops must be designed deliberately. Pro code does not naturally surface business feedback. Telemetry needs to be tied to outcomes rather than just performance, and teams must be able to adjust behaviour post‑deployment without destabilising systems. AI makes this more acute, because systems can behave undesirably while remaining technically healthy.
Incident response also has to evolve. AI‑enabled systems often fail semantically rather than catastrophically. They continue to run while doing the wrong thing. Operations teams need to recognise behavioural drift and confidence degradation, not just outages.
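One way to operationalise "recognising behavioural drift" is to monitor a semantic signal, such as model confidence, against a baseline window rather than only watching uptime. The sketch below is a minimal illustration of that idea; the window sizes, thresholds, and the use of mean confidence as the signal are all assumptions.

```python
from collections import deque
from statistics import mean


class DriftMonitor:
    """Flags when recent confidence scores fall well below a baseline,
    even though the service itself is still 'up'. Illustrative only."""

    def __init__(self, window: int = 100, drop_threshold: float = 0.15):
        self.baseline: deque = deque(maxlen=window)
        self.recent: deque = deque(maxlen=window)
        self.drop_threshold = drop_threshold

    def record_baseline(self, confidence: float) -> None:
        """Capture confidence scores from a known-good period."""
        self.baseline.append(confidence)

    def record(self, confidence: float) -> bool:
        """Record a live score; return True once drift is suspected."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough live data to compare yet
        return mean(self.baseline) - mean(self.recent) > self.drop_threshold
```

A check like this fires on the failure mode described above: every health probe is green, yet the system's answers have quietly degraded.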
When these capabilities are weak, organisations again fall back on people. Engineers and architects are pulled into production to interpret behaviour that the operating model never surfaced explicitly.
Hybrid estates and the importance of the seams
In reality, most enterprises adopt both approaches. Low code dominates productivity, automation, and experimentation. Pro code dominates systems of record and high‑risk integrations. The coexistence itself is not the problem. The seam between them is.
Hybrid estates require explicit decisions about which workloads must remain pro code, which can safely live in low code, and which may start low code but must graduate as they become critical. If these decisions are left implicit, failures become ambiguous and political rather than technical. Ownership blurs, accountability weakens, and organisations once again rely on individuals to hold things together.
This is another place where FDE‑like behaviour appears, regardless of job title.
Why reduced‑friction, intent‑driven interaction patterns matter here
Newer interaction patterns that reduce friction between intent and execution are only relevant in this discussion because they act as accelerants. They remove safety mechanisms organisations were relying on without admitting it. Human context, UI friction, and undocumented assumptions stop acting as brakes. Behaviour executes faster than governance adapts.
In low code estates, this exposes hidden dependencies and weak boundaries. In pro code estates, it exposes assumptions about fixed integration paths and predictable call graphs. These patterns do not introduce new problems. They make existing ones impossible to ignore.
What Forward Deployed Engineers are really telling you
Forward Deployed Engineers are not a delivery model. They are an operational signal.
They appear when ownership is unclear, behaviour cannot be easily explained, promotion rules are missing, and low code and pro code collide without a shared operating model. Mature organisations do not eliminate the judgement these people bring. They work to encode it into platforms, processes, and guardrails that scale beyond individuals.
The decision organisations are actually making
The low code versus pro code question is not about speed or ideology. It is about honesty.
- Where do you want complexity to live?
- Who do you expect to absorb uncertainty?
- And have you built an operating model that matches that choice?
- Low code demands early governance, strong boundaries, and enablement.
- Pro code demands engineering discipline, observability, and feedback loops.
- Hybrid demands clarity at the seams and the willingness to do both properly.
Most organisations do not fail because they chose the wrong tools. They fail because they chose tools without changing how they operate.
That is the real decision hiding behind the debate.