Microsoft Research published its annual New Future of Work report this week. I am sure that David (see podcast for more context) will have beaten me to this one. For the rest of us, the obligatory optimism in the opening paragraphs requires some patience, but stick with it, because the researchers are more candid about AI's limitations than you might expect from a Microsoft-owned publication. The report draws on research from inside and outside Microsoft, covering AI adoption, labour market effects, human-AI collaboration, and the changing nature of specific professions.
The headline finding is that AI is driving change faster than any previous technology wave. That claim appears in almost every AI report published right now, so it is easy to tune out. But the supporting evidence here is more interesting than the summary suggests. The researchers are not just pointing at adoption curves. They are pointing at structural shifts in how work gets done, who benefits from those shifts, and what the longer-term consequences might be. The picture is messier than the productivity narrative you will hear from most vendors.
The time savings that are not quite savings
The productivity case for AI at work has been built largely on self-reported time savings. The Microsoft Research report cites enterprise users saving 40 to 60 minutes a day, and model-based evaluations showing frontier AI systems approaching expert-level quality on a growing range of tasks. On paper, that looks compelling. In practice, it is considerably more complicated.
The report introduces a term that deserves wider use: workslop. This is AI-generated content that looks polished but is not accurate or useful. In a US survey cited in the report, 40% of employees said they had received workslop from colleagues in the past month. Think about what that means operationally. Someone uses AI to produce something faster, hands it over, and the recipient now has to work out whether it is actually correct. If it is not, the time saved at the point of production has been transferred to the point of consumption, and often multiplied along the way. The net productivity gain is not 40 to 60 minutes. It may be negative.
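The arithmetic here is worth making explicit. A minimal sketch of the trade-off, with all numbers purely hypothetical illustrations rather than figures from the report:

```python
# Toy model of the "workslop" effect: time saved by the producer can be
# offset, or outweighed, by verification and rework time on the
# recipient's side. All inputs are assumed illustrative values.

def net_minutes_saved(produced_saving: float,
                      workslop_rate: float,
                      review_cost: float,
                      rework_cost: float) -> float:
    """Expected net daily time saved across producer and recipient.

    produced_saving: minutes the producer saves by using AI
    workslop_rate:   probability the output is polished but wrong
    review_cost:     minutes the recipient spends checking each output
    rework_cost:     extra minutes when the output turns out to be workslop
    """
    expected_downstream_cost = review_cost + workslop_rate * rework_cost
    return produced_saving - expected_downstream_cost

# Headline case: 50 minutes saved at production, but every handover needs
# a 15-minute check, and 40% of outputs trigger 90 minutes of rework.
print(net_minutes_saved(50, 0.40, 15, 90))  # -1.0: the gain is wiped out
```

With those assumed numbers, a headline saving of 50 minutes becomes a small net loss once expected downstream cost (15 + 0.4 × 90 = 51 minutes) is counted, which is the point the report's 40% survey figure makes operational.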
This is not an argument against AI. It is an argument against the lazy version of the productivity narrative, the one that treats time-to-output as a proxy for value and ignores what happens downstream. Anyone who has reviewed a large volume of AI-assisted work knows the pattern. Certain things are faster. Other things require more careful checking than they used to, because the confidence of the output has decoupled from its reliability.
The report also notes something that gets little airtime in vendor communications: AI can reduce productivity. Not just slow it down at the margins, but actively make things worse when the tool does not understand the user's goals, or when the user overestimates what the tool can do. These are not edge cases. They are predictable failure modes that emerge from the current state of the technology and from the way most organisations are deploying it, which is to say, without much deliberate thought about either.
The junior problem is real and it is structural
One of the more striking findings in the report concerns entry-level employment. The data suggests that employment for workers aged 22 to 25 in highly AI-exposed jobs declined by 16% relative to comparable roles with lower AI exposure. Hiring into junior positions appears to slow after firms adopt AI. The researchers flag this as a longer-term concern rather than just a near-term market adjustment, and they are right to do so.
The reason this matters is not just the immediate employment effect. It is about how expertise develops. Junior roles exist, in large part, because they are where people learn. They are where someone gets corrected, develops judgement, builds an instinct for what good looks like. If those roles are automated away, the pipeline for the next generation of expert practitioners narrows. You end up with organisations that are relying on AI to do work that humans no longer know how to do, and no obvious way to rebuild that capability if the AI gets it wrong.
There is a version of this story that is optimistic: AI frees junior workers from drudge work and lets them focus on higher-value activities earlier in their careers. The report entertains that possibility. But the empirical evidence currently points the other way. The automation of entry-level tasks is not obviously creating new entry-level opportunities at the same rate. The optimistic version requires deliberate intervention. It will not happen on its own.
Expertise matters more, not less
The finding that runs through the entire report, and the one that deserves the most emphasis, is that human expertise is becoming more valuable as AI improves, not less. This cuts directly against the replacement narrative that dominates public discussion.
The mechanism is worth understanding. As AI takes on more of the execution work, the critical bottleneck shifts to judgement. Someone has to decide whether the AI output is correct. Someone has to spot the hallucination, identify the subtle error, recognise when the output is technically accurate but contextually wrong. Someone has to ask the right question in the first place, and frame the problem in a way that produces a useful answer. None of that is easy. All of it gets harder, not easier, if the person doing it has spent years delegating their thinking to a machine.
The report describes a shift from "thinking by doing" to "choosing from outputs." Writing a document forces you to think. Prompting AI to write a document and then editing it is a different cognitive activity, and not necessarily a more demanding one. The researchers raise a specific concern about repeated cognitive offloading: the effect carries over even when AI is removed. People who rely heavily on AI for certain tasks appear to lose some capacity to perform those tasks independently. That is a meaningful finding, and it has implications for how organisations should be thinking about AI adoption, not just whether to adopt, but how to structure the work so that human capability is maintained rather than hollowed out.
This is particularly relevant in knowledge work, where the value of an experienced professional lies precisely in their accumulated judgement. An architect, a senior analyst, or a seasoned consultant does not just produce output. They know what questions to ask, what risks to look for, what the client actually needs as opposed to what they asked for. AI can produce a plausible-looking version of the output. It cannot, at least not yet, replicate the judgement that ensures the output is actually right for the situation.
The practical implication is that organisations which treat AI as a way to reduce headcount by replacing experienced people with AI plus cheaper labour are making a category error. The experienced people are not doing what the AI does. They are doing something the AI cannot do yet, and their value increases as AI handles more of the execution. The organisations that understand this, and invest accordingly in developing and retaining expertise, are the ones that will be in a stronger position as the technology matures.
What this means in practice
The Microsoft Research report does not say any of this quite as directly as I have. It is an academic publication with a collaborative authorship list, which tends to produce careful, hedged language. But the evidence it presents supports a fairly clear set of conclusions.
AI will not automatically make your organisation more productive. The time savings are real in some contexts and illusory in others, and the difference depends heavily on whether people are using AI thoughtfully or just using it. The quality of AI-assisted output depends on the quality of the human oversight applied to it. That oversight requires expertise, and expertise requires investment in the people who have it and the conditions that develop it.
The productivity gains that are real tend to accrue to people who already have strong domain knowledge. They use AI to do more, faster, with their existing judgement intact. The productivity losses, or at least the risks, tend to concentrate where AI is used to substitute for judgement rather than support it, whether that is a junior worker being replaced before they have had a chance to develop expertise, or a senior worker who has gradually delegated so much thinking to the machine that their own judgement has atrophied.
None of this makes AI less important. It makes it more important to be deliberate about how you use it. The organisations that will do well are not the ones that adopt fastest. They are the ones that adopt most thoughtfully, preserving and developing the human expertise that makes AI output valuable rather than just plentiful.
The full New Future of Work 2025 report is worth reading if you work in this space. It is more rigorous and more honest than most of what gets published on AI and work, and the parts that are uncomfortable are the parts most worth sitting with.
Source: New Future of Work: AI is driving rapid change, uneven benefits — Microsoft Research