This week I got introduced to a term I somehow missed: Forward Deployed Engineer, or FDE. I will be honest, it felt like one of those phrases I probably should already have been aware of, but clearly it passed me by until now. The slightly annoying part is that once you see it, you cannot unsee it. It explains a lot about why so many AI initiatives look impressive in a demo and then quietly stall the moment they meet the real world.
So a quick question to start. Have you come across the term FDE yet, or have you simply been doing the work under a different name.
What an FDE actually is
In plain terms, an FDE is an engineer deployed close to reality.
Not close to architecture diagrams, steering committees, or platform roadmaps, but close to the people doing the work, the systems that keep the lights on, and the processes that exist because an organisation has spent years optimising for survival rather than elegance.
An FDE sits in an uncomfortable middle:
- Technical enough to build, debug, and make changes safely
- Practical enough to work inside messy operational constraints
- Comfortable translating what users do into what systems should do
- Able to feed that learning back into engineering and product without turning it into a thesis
This is not a new idea. The novelty is the timing. AI, particularly agentic AI, has brought back a class of problems that most enterprises thought they had neatly outsourced to delivery phases, UAT, and release management.
Why this role is reappearing now
The simplest explanation is that AI has changed the nature of deployment.
Traditional enterprise software is largely deterministic. You build it, test it, deploy it, and then spend a long time pretending it is finished. Even complex systems behave within known boundaries. That is why project-shaped delivery models work reasonably well.
Agentic AI does not behave like that. It is not just a feature. It is a behaviour layer sitting on top of tools, data, and workflows. It adapts. It changes. It can behave differently depending on context. It also tends to fail in ways that are harder to predict and harder to explain.
Once you are in that world, the limiting factor stops being model capability and starts being operational reality:
- How it is wired into real processes
- How access is controlled
- How actions are audited
- How failures are handled
- How trust is maintained
- How changes are rolled out without breaking everything
That is why FDEs are back. They are a response to the fact that production is now a feedback loop, not an event.
The part most organisations are missing
Most enterprises are not short on AI ideas.
They have pilots, proofs of concept, copilots, and the occasional agent doing something useful in a narrow context. The problem is that many of these initiatives are still being delivered using models designed for static systems: neat projects, tidy handovers, and a quiet assumption that the thing will behave itself once it is live.
That assumption does not hold for agentic systems.
The real work starts when the system meets real data, real users, real permissions, and real legacy constraints. That is when confidence is either built or lost, usually very quickly.
The FDE concept forces an uncomfortable question: if you need engineers embedded close to operations to make AI valuable, what does that say about your operating model for change.
This is not a criticism. Most organisations have been optimised for stability. Agentic AI demands continuous adjustment, and many enterprises are not structurally set up for that.
Why agentic AI makes this sharper
There is a big difference between a copilot that drafts an email and an agent that takes actions across systems.
The moment an agent can do things like:
- update a record
- trigger a workflow
- open or close a ticket
- approve or reject something
- move money, even indirectly
- change access, even indirectly
…you have moved from “helpful tool” into “operational actor”.
If you have been following Microsoft’s recent messaging on agents, this will all sound familiar.
The awkward questions are no longer optional:
- Whose identity is it acting under
- What permissions does it have, and why
- How do we audit and replay what it did
- How do we constrain it without neutering it
- What is the failure mode, and who gets paged
- What is the safe rollback plan when it goes wrong
The reason FDEs matter here is that these questions are not solved by a single policy or a single control. They are solved by lots of small engineering and operating decisions, made repeatedly, in context, and adjusted as reality changes.
That is exactly the gap the FDE sits in.
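To make "lots of small engineering decisions" concrete, here is a minimal sketch of the pattern those questions point at: every agent action passes through a gate that checks an explicit allowlist, records the identity it acted under, and keeps an audit trail whether the action succeeded, failed, or was denied. All names here are hypothetical illustrations, not any real agent framework's API.

```python
import datetime
import uuid

# Hypothetical names throughout: a sketch of the gating pattern,
# not any particular agent framework's API.

AUDIT_LOG = []  # in production this would be an append-only store

ALLOWED_ACTIONS = {
    # action name -> identity the agent must be acting under
    "open_ticket": "svc-agent-helpdesk",
    "close_ticket": "svc-agent-helpdesk",
}

def gated_call(action, identity, payload, executor):
    """Run an agent action only if it is allowlisted for this identity,
    and record enough to audit what happened afterwards."""
    entry = {
        "id": str(uuid.uuid4()),
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "identity": identity,
        "payload": payload,
    }
    # Deny anything outside the allowlist, but still leave evidence.
    if ALLOWED_ACTIONS.get(action) != identity:
        entry["outcome"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{identity} may not perform {action}")
    try:
        result = executor(payload)
        entry["outcome"] = "ok"
        return result
    except Exception as exc:
        entry["outcome"] = f"failed: {exc}"
        raise
    finally:
        AUDIT_LOG.append(entry)

# Usage: a permitted action runs and is logged.
ticket = gated_call("open_ticket", "svc-agent-helpdesk",
                    {"title": "VPN down"},
                    lambda p: {"ticket": 101, **p})
```

The point is not this particular code, but that the answers to "whose identity", "what permissions", and "how do we audit it" live in dozens of small choices like these, made in context and revised as reality changes.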
When pilots collide with reality
This pattern is familiar to anyone who has seen real deployments.
An AI pilot works well in isolation. Then it is connected to live systems. Permissions are slightly wrong. Data is incomplete. Knowledge is stale. The agent takes a surprising path because it found a shortcut nobody intended.
Trust evaporates far faster than it is built.
At that point, success depends on how quickly you can close the loop between what is happening in production and what needs to change in the system. Without that loop, pilots quietly stall and get rebranded as learning exercises.
The FDE exists to keep that loop tight. Not as a hero role, but as a mechanism for continuous alignment.
What this means for clients
If you are a client, the blunt version is this.
If you are serious about agentic AI, you need to plan for the reality that build is only the start. Once agents are live, the hard problems become ownership, permissions, auditability, escalation paths, and accountability.
A few practical questions that separate “interesting” from “deployable”:
Ownership and accountability
Who owns the agent once it is in production. Not who built it, who owns the outcome.
When the agent does something ambiguous rather than clearly right or wrong, who decides what “good” looks like, and who signs off changes.
Controls and boundaries
How permissions are handled. Is it acting as a user, as an application, or as a brokered identity. If you cannot explain this cleanly, you will not get past any serious risk function.
What boundaries exist around tools, data, and actions. Not abstract principles, actual constraints.
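The user-versus-application-versus-brokered distinction can be sketched in a few lines. This is an illustrative model only, with hypothetical names, not a real identity provider's API; the key property is that a brokered identity is capped by both the application's rights and the user's, which is the version a risk function can actually reason about.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three identity modes. None of these
# names come from a real identity provider's API.

@dataclass
class ActingIdentity:
    mode: str                           # "user", "application", or "brokered"
    subject: str                        # the principal the agent runs as
    on_behalf_of: Optional[str] = None  # set only in brokered mode

def effective_permissions(identity, user_perms, app_perms):
    """The permissions the agent actually exercises depend on the mode.
    Brokered mode is the intersection: least privilege, and explainable."""
    if identity.mode == "user":
        return user_perms
    if identity.mode == "application":
        return app_perms
    if identity.mode == "brokered":
        return user_perms & app_perms  # can never exceed what the user could do
    raise ValueError(f"unknown mode {identity.mode}")

user_perms = {"read_tickets", "close_tickets"}
app_perms = {"read_tickets", "close_tickets", "delete_tickets"}
broker = ActingIdentity("brokered", "svc-agent", on_behalf_of="alice")

# Brokered: the agent cannot delete tickets even though the app can.
effective = effective_permissions(broker, user_perms, app_perms)
```

If you can state your agent's permissions in these terms, the conversation with security and risk becomes tractable. If you cannot, it usually means the agent is quietly over-privileged.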
Audit and evidence
How you prove what happened after the fact. Not just logs, but a coherent chain of reasoning, action, and authorisation.
How you handle drift. The system will change behaviour over time because the world changes, the data changes, and users change how they interact with it.
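One way to read "not just logs, but a coherent chain": each record carries the reasoning that produced an action and the authorisation that allowed it, and records are hash-chained so gaps or tampering are detectable after the fact. A minimal sketch, with illustrative field names, of what that evidence chain could look like:

```python
import hashlib
import json

def evidence_record(reasoning, action, authorisation, prev_hash=""):
    """Bind reasoning, action, and authorisation into one record, and
    chain it to the previous record's hash. Field names are illustrative."""
    body = {
        "reasoning": reasoning,          # why the agent chose this
        "action": action,                # what it actually did
        "authorisation": authorisation,  # who or what allowed it
        "prev": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_intact(records):
    """Replay check: every record must match its own hash and point
    at the hash of the record before it."""
    prev = ""
    for r in records:
        body = {k: r[k] for k in ("reasoning", "action", "authorisation", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or recomputed != r["hash"]:
            return False
        prev = r["hash"]
    return True

r1 = evidence_record("ticket matches resolved outage", "close_ticket(101)",
                     "policy: auto-close on resolved outage")
r2 = evidence_record("duplicate of 101", "close_ticket(102)",
                     "policy: auto-close duplicates", prev_hash=r1["hash"])
```

Drift is the harder half: the same chain, sampled over time, is also your baseline for noticing that the agent's behaviour distribution has shifted, before users notice it for you.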
Operational model
What the escalation path is. If an agent fails, does it fail safe. Who is alerted. How quickly can it be stopped.
How updates are managed. You will need release discipline, and you will need to test changes in ways that are different from traditional software.
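The "fail safe, who is alerted, how quickly can it be stopped" trio can be sketched as a simple kill switch: on error the agent returns an inert fallback, and if failures cluster within a window it halts entirely and pages a human. This is a hedged illustration of the pattern, not production code, and every name in it is hypothetical.

```python
import time

class KillSwitch:
    """Fail-safe wrapper for agent actions. Hypothetical sketch:
    halts the agent when failures cluster, and pages a human."""

    def __init__(self, max_failures, window_seconds, page):
        self.max_failures = max_failures
        self.window = window_seconds
        self.page = page        # callback that alerts a person
        self.failures = []      # timestamps of recent failures
        self.halted = False

    def run(self, action, fallback):
        """Return the action's result, or the inert fallback on failure.
        Once halted, nothing runs until a person intervenes."""
        if self.halted:
            return fallback
        try:
            return action()
        except Exception as exc:
            now = time.monotonic()
            # keep only failures inside the rolling window
            self.failures = [t for t in self.failures if now - t < self.window]
            self.failures.append(now)
            if len(self.failures) >= self.max_failures:
                self.halted = True
                self.page(f"agent halted after repeated failures: {exc}")
            return fallback

alerts = []
switch = KillSwitch(max_failures=2, window_seconds=60, page=alerts.append)

def flaky_action():
    raise RuntimeError("downstream system unavailable")

# Two clustered failures: the agent fails safe, then stops itself.
switch.run(flaky_action, fallback="queued for human review")
switch.run(flaky_action, fallback="queued for human review")
```

The design choice worth noting is that the fallback is a safe default, not a retry: an agent that keeps acting while its environment is broken is exactly how trust evaporates.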
You might not need the title Forward Deployed Engineer, but you do need the capability.
What this means for GSIs like IBM
For GSIs, including organisations like IBM, this is an equally uncomfortable shift.
Much of the industry is still structured around projects, phases, and handovers. That model works when the system you are delivering is largely static. It works much less well when the system evolves over time and is embedded into core business processes.
Agentic AI pushes GSIs toward longer-lived engagement models, embedded teams, and shared accountability for outcomes. Less emphasis on the moment of delivery, more emphasis on the lifecycle that follows.
And it changes the skills mix in a way many delivery organisations are not currently set up for.
The uncomfortable skills mix
The FDE sits somewhere between engineer, architect, and operator. Comfortable in code, comfortable in conversations with the business, and accountable when things do not behave as expected.
In most GSIs, those skills are split across roles that do not naturally sit in the same room. You get:
- architects who can speak to governance but do not touch the build
- engineers who can build but are kept away from messy business nuance
- operations teams who inherit the system but did not shape it
Agentic AI punishes those separations, because the feedback loop between real usage and system behaviour is the product.
The operating model implication
If a GSI wants to stay relevant in this space, it has to be able to:
- deploy teams close to real work
- iterate quickly and safely
- operate what it builds, not just hand it over
- embed governance as a runtime discipline, not a post-project artefact
That does not mean every engagement becomes a permanent managed service. It does mean the contract and delivery model need to reflect reality rather than the comfort of a neat end date.
Where the FDE fits, organisationally
One thing that is easy to miss is that an FDE is not necessarily a person. It is a capability. You can build it in different ways.
Option one: a dedicated FDE function
A group that can deploy into teams or client environments, close gaps quickly, and feed learnings back into engineering and delivery patterns.
The risk here is that it becomes a dumping ground for every hard problem nobody owns. If you do this, you need clear missions and scope.
Option two: embed the FDE pattern into delivery teams
Instead of a separate role, you design delivery teams so that at least one person has the engineering depth plus operational and stakeholder fluency to stay close to reality.
This is harder to do at scale, but tends to be healthier long term.
Option three: rotate engineers into forward deployed work
This is common in strong product organisations. Engineers spend time close to deployments, then return with learning that improves the platform and delivery patterns.
Most enterprises and GSIs are not culturally built for this, but it is arguably the most effective approach.
The risk of getting this wrong
There is a failure mode worth calling out.
Some organisations will treat FDEs as a permanent human patch for immature platforms and weak operating models. Instead of improving controls, governance, and repeatability, they will rely on embedded engineers to firefight indefinitely.
That works until scale arrives. Then it turns into expensive heroics, burnout, and a collection of bespoke fixes that never become systemic improvement.
The goal of an FDE model should be to accelerate learning and reduce the need for heroics over time, not normalise them.
If you are adopting this pattern, the success metric is not “how many FDEs do we have”. It is “how quickly can we make the system and the operating model require fewer FDE interventions”.
Where do we move next
If nothing else, I am relieved I finally learned the term. Slightly annoyed it took me this long, but we move.
So where do we move.
We move from treating AI as something you deploy, to something you have to operate.
For clients, that means accepting that agents and copilots do not quietly behave themselves once they are live. The hard problems start after go-live, not before it.
For GSIs, it means being honest about delivery models and getting comfortable with longer-term ownership, operational accountability, and continuous change.
The Forward Deployed Engineer is not the destination. It is a signal.
It tells us AI has crossed from interesting technology into operational reality. And once you are there, the question stops being how clever the model is, and starts being how well the organisation adapts around it.
Inspiration: https://www.forbes.com/councils/forbestechcouncil/2025/12/29/the-rise-of-the-forward-deployed-engineer-a-new-career-path-for-the-native-ai-era/