Microsoft's Sherrod DeGrippo opened her RSAC 2026 briefing with a word that has dominated the AI and security conversation for the past year: speed. Then she moved past it. The real shift, she argued, is not that threat actors are moving faster (though they are). It is that AI is now embedded across the entire attack lifecycle, from reconnaissance through post-compromise, and that the next major target is the agent ecosystem organisations are racing to deploy.
That framing matters. It moves the conversation from "AI helps attackers do more of the same" to "AI fundamentally changes what the attack surface looks like." And if you are an architect or security leader planning an agentic AI deployment, the implications land uncomfortably close to home.
The numbers that should change the conversation
The headline statistic from Microsoft's research is stark. AI-enhanced phishing campaigns are achieving click-through rates of around 54%, compared to roughly 12% for traditional campaigns. That is not a marginal improvement. It makes AI-assisted lures roughly four and a half times as effective, and the gain is driven not by volume but by precision. AI is enabling threat actors to localise content, adapt messaging to specific roles, and reduce the friction involved in crafting a lure that converts into access.
Pair that with infrastructure designed to bypass multifactor authentication and you get phishing operations that are more resilient, more targeted, and significantly harder to defend against at scale. The old assumption that MFA provides a reliable safety net against credential theft needs revisiting, because adversary-in-the-middle techniques now intercept session tokens in real time and let attackers authenticate as legitimate users without triggering alerts.
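To make that gap concrete, one compensating control is to flag a session token that suddenly appears from a different device and network than the one that completed the MFA challenge. The sketch below is illustrative only: the event fields, field names, and the two-signal rule are assumptions for the example, not any vendor's sign-in log schema or detection logic.

```python
from dataclasses import dataclass


@dataclass
class SessionEvent:
    """One authenticated request, as it might appear in sign-in logs."""
    session_id: str
    ip_address: str
    device_fingerprint: str
    asn: str  # autonomous system of the client network


def is_suspected_token_replay(origin: SessionEvent, current: SessionEvent) -> bool:
    """Heuristic: the same session token presented from a different device
    *and* a different network than the one that passed MFA is a strong hint
    of adversary-in-the-middle token theft."""
    if origin.session_id != current.session_id:
        return False  # different sessions, nothing to compare
    device_changed = origin.device_fingerprint != current.device_fingerprint
    network_changed = (origin.asn != current.asn
                       and origin.ip_address != current.ip_address)
    return device_changed and network_changed


# Example: token minted on the victim's laptop, replayed from attacker infrastructure.
victim = SessionEvent("sess-01", "203.0.113.10", "fp-laptop-a", "AS64500")
attacker = SessionEvent("sess-01", "198.51.100.7", "fp-headless-b", "AS64999")
print(is_suspected_token_replay(victim, attacker))  # True
```

A rule this simple will miss plenty and misfire on mobile users changing networks, but it illustrates the point: once the token itself is the thing being stolen, detection has to move to how and where the token is used.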
Cybercrime as a supply chain
The Tycoon2FA operation, linked to the group Microsoft tracks as Storm-1747, is the clearest illustration of where this is heading structurally. This was not a single phishing kit. It was a subscription platform generating tens of millions of phishing emails per month, linked to nearly 100,000 compromised organisations since 2023, and at its peak responsible for roughly 62% of all phishing attempts Microsoft was blocking each month.
The important detail is not the scale alone. It is the modularity. One service handled phishing templates. Another provided infrastructure. Another managed email distribution. Another monetised access. It was composable, scalable, and available by subscription. Microsoft's Digital Crimes Unit disrupted it earlier this year, seizing 330 domains in coordination with Europol, but the goal was not simply to take down websites. It was to apply pressure to a supply chain, because cybercrime now operates on the same industrial economics as legitimate SaaS.
This is the model that AI accelerates. When sophisticated capabilities are packaged as services and sold on subscription, AI does not just help the sophisticated actors at the top. It lowers the barrier to entry for everyone who plugs into the ecosystem.
The agent surface problem
DeGrippo's most forward-looking point, and the one that should concern architects and security leaders most directly, is about the agent ecosystem. Her framing was blunt: the agent ecosystem will become the most attacked surface in the enterprise, and organisations that cannot answer basic inventory questions about their agent environment will not be able to defend it.
That is not a hypothetical. Organisations are already deploying AI agents across customer service, internal operations, development workflows, and business process automation. Many of those agents have access to sensitive data, can take actions on behalf of users, and operate with permissions that were granted once and rarely reviewed. The identity and access patterns around agents are, in most organisations, nowhere near as mature as those around human users. And the governance frameworks that might catch an agent behaving anomalously are, in most cases, still being designed.
The parallel to the Tycoon2FA supply chain is instructive. If cybercrime is industrialising around modular services, the agent ecosystem offers an ideal set of targets: distributed, often poorly inventoried, operating with standing permissions, and difficult to monitor at the behavioural level. An agent that has been compromised or manipulated does not look like an attacker. It looks like an agent doing its job.
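One way to reason about that visibility problem is to compare each agent action against a baseline of what that agent normally does, rather than against a list of known-bad behaviour. The sketch below is a minimal illustration under assumed inputs: the action history, the counts, and the frequency threshold are all hypothetical, not a description of any monitoring product.

```python
from collections import Counter

# Hypothetical action history for a customer-service agent: mostly ticket work.
history = Counter({"read_ticket": 4800, "update_ticket": 950, "send_reply": 1200})


def is_anomalous(action: str, history: Counter, min_share: float = 0.01) -> bool:
    """Flag an action the agent has never performed, or performs so rarely
    that it falls below min_share of its recorded behaviour."""
    total = sum(history.values())
    if total == 0:
        return True  # no baseline yet: treat everything as reviewable
    return history[action] / total < min_share


print(is_anomalous("read_ticket", history))           # False: routine behaviour
print(is_anomalous("export_all_customers", history))  # True: never seen before
```

The design choice matters more than the code: a compromised agent presents valid credentials and valid permissions, so the only signal left is deviation from its own established pattern.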
AI across the full lifecycle
DeGrippo mapped AI's role across every phase of the attack lifecycle, and the pattern is consistent. In reconnaissance, AI compresses the time between target selection and first contact. In resource development, it generates forged documents and polished social engineering narratives at scale. For initial access, it refines deepfakes, voice overlays, and message customisation using scraped data. In persistence and evasion, it scales fake identities and automates communication that blends with normal activity. In weaponisation, it enables malware development and real-time debugging that adapts to the victim environment. In post-compromise operations, it adapts tooling to the specific target and, in some cases, automates ransom negotiation.
The objectives have not changed. Credential theft, financial gain, and espionage remain the core motivations. What has changed is the tempo, the iteration speed, and the ability to test and refine at scale.
What this means for defenders
Three themes from the RSAC briefing deserve architectural attention.
The first is the agentic threat model itself. The barrier to launching sophisticated attacks has collapsed. What once required nation-state resources is now accessible to a motivated individual with the right tools. Organisations need to model threats against their agent deployments with the same rigour they apply to human identity, and in most cases that means starting from scratch.
The second is inventory. You cannot defend what you cannot see. Knowing what agents you have deployed, what permissions they hold, what data they can access, and what actions they can take is not a compliance exercise. It is the baseline for any credible security posture in an agentic world.
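An inventory does not need to be elaborate to be useful; it needs to exist and to answer those four questions for every agent. The record layout and the 90-day review policy below are assumptions chosen for illustration, not a reference schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AgentRecord:
    """Minimal inventory entry: who owns the agent and what it can touch."""
    name: str
    owner: str
    permissions: list[str]
    data_scopes: list[str]
    last_access_review: date


def overdue_reviews(inventory: list[AgentRecord], max_age_days: int = 90) -> list[str]:
    """Return agents whose standing permissions have not been reviewed recently."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [a.name for a in inventory if a.last_access_review < cutoff]


inventory = [
    AgentRecord("invoice-bot", "finance", ["erp.read", "erp.post"], ["invoices"],
                date(2024, 1, 15)),
    AgentRecord("helpdesk-agent", "it-ops", ["tickets.read"], ["tickets"],
                date.today()),
]
print(overdue_reviews(inventory))  # flags invoice-bot: granted once, rarely reviewed
```

Even a flat list like this surfaces the pattern DeGrippo warned about: permissions granted once and never revisited.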
The third is the changing role of the security analyst. DeGrippo's point that the analyst as practitioner is giving way to the analyst as orchestrator reflects a broader shift. The talent models organisations are hiring against today are already outdated, and the SOC of the future demands defenders who can manage and audit agentic systems, not just respond to alerts.
The moment to get ahead of this is now, not when the first agent-targeted campaign makes the headlines.
Source: Threat actor abuse of AI accelerates from tool to cyberattack surface