The Microsoft Cloud Blog

Expert insights on Microsoft Azure, Cloud Architecture, and Enterprise Technology

Not All Assets Are Equal: How Microsoft Defender Is Rethinking Protection for High-Value Systems
5 min read

There is a concept in security that most organisations understand in theory but rarely act on in practice: not all systems carry the same risk. A developer workstation and a domain controller are both endpoints, but compromising one gives you a foothold while compromising the other gives you the keys to the kingdom. The gap between those two outcomes is enormous, and yet many organisations apply broadly the same detection thresholds and protection logic to both.

Microsoft has been pushing on this problem for a while, and a recent research post from the Defender Security Research Team sets out in some detail how asset-aware protection actually works in practice. It is worth unpacking, because the approach is more sophisticated than the marketing language usually suggests.

What makes an asset "high value"

The framework starts with Microsoft Security Exposure Management, which builds a continuously updated inventory of assets across devices, identities, cloud resources, and external attack surfaces. The system automatically classifies High-Value Assets (HVAs) based on their role and function in the environment. Domain controllers, Exchange and SharePoint servers, certificate authorities, identity synchronisation systems like Microsoft Entra Connect, and internet-facing services all qualify.

The point of the classification is not just labelling. It changes how Defender evaluates activity on those systems. Behaviours that would generate a low-confidence alert on a general-purpose server are elevated to high-confidence prevention when observed on a Tier-0 system. The same process execution, the same command-line pattern, and the same file operation can produce entirely different responses, because the asset context is different.
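To make the idea concrete, here is a minimal sketch of asset-aware severity scoring. The class names, weights, and thresholds are invented for illustration and do not reflect Defender's internal logic; the point is only that the same signal crosses different response thresholds depending on the asset it fires on.

```python
from dataclasses import dataclass

# Hypothetical asset criticality weights; tier0 covers domain controllers,
# certificate authorities, and identity synchronisation systems.
ASSET_WEIGHT = {
    "workstation": 1.0,
    "server": 1.5,
    "tier0": 3.0,
}

@dataclass
class Signal:
    name: str
    base_confidence: float  # 0.0 - 1.0, asset-agnostic detection confidence

def effective_confidence(signal: Signal, asset_class: str) -> float:
    """Scale a behavioural signal's confidence by the asset's criticality."""
    return min(1.0, signal.base_confidence * ASSET_WEIGHT[asset_class])

def response_for(signal: Signal, asset_class: str) -> str:
    conf = effective_confidence(signal, asset_class)
    if conf >= 0.9:
        return "block"  # high-confidence prevention
    if conf >= 0.5:
        return "alert"  # analyst review
    return "log"        # audit trail only

sig = Signal("ntdsutil-execution", base_confidence=0.35)
print(response_for(sig, "workstation"))  # log
print(response_for(sig, "tier0"))        # block
```

The same `ntdsutil-execution` signal is merely logged on a workstation but blocked outright on a tier-0 system, which is the behaviour the research post describes.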

This matters because the most common detection methods rely on behavioural signals without asset context. Administrative tools, scripting frameworks, and system utilities look identical whether they are being used legitimately or maliciously. On a domain controller, the threshold for intervention needs to be lower, because the blast radius if you get it wrong is dramatically higher.

What the real-world attacks actually look like

The research post walks through three attack scenarios that illustrate how this plays out, and they are recognisable to anyone who has spent time in enterprise security.

The first involves credential theft from a domain controller. The attack chain starts with a compromised internet-facing server, moves laterally to a server with broader network access, sets up an NTLM relay trap, captures Domain Admin credentials through that relay, and then authenticates to a domain controller to dump the NTDS.DIT database using ntdsutil.exe. Each individual step (reverse SSH tunnel, scheduled task creation, ntdsutil execution) can appear in legitimate administrative contexts. Together, on a domain controller, they are unambiguous. Defender blocked the credential extraction and automatically disabled the Domain Admin account, stopping the attack before the database was accessed.
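The "individually benign, collectively unambiguous" logic can be sketched as a simple correlation rule: flag a tier-0 host only when every step of the chain lands on it within one time window. The step names and the 30-minute window below are invented for illustration; Defender's actual correlation logic is not public.

```python
from collections import defaultdict
from datetime import timedelta

# Each of these steps can be legitimate in isolation; together on a
# tier-0 host they match the attack chain described above.
SUSPICIOUS_STEPS = {"reverse-tunnel", "scheduled-task", "ntdsutil-exec"}

def flag_attack_chains(events, window=timedelta(minutes=30)):
    """events: iterable of (timestamp, host, asset_class, step).
    Returns tier-0 hosts where every suspicious step occurred within
    a single time window."""
    by_host = defaultdict(list)
    for ts, host, asset_class, step in events:
        if asset_class == "tier0" and step in SUSPICIOUS_STEPS:
            by_host[host].append((ts, step))
    flagged = []
    for host, hits in by_host.items():
        hits.sort()
        for i, (start, _) in enumerate(hits):
            seen_steps = {s for t, s in hits[i:] if t - start <= window}
            if seen_steps == SUSPICIOUS_STEPS:
                flagged.append(host)
                break
    return flagged
```

A single `ntdsutil-exec` on a workstation never reaches the rule at all; the full sequence on a domain controller does, which is the asymmetry the scenario illustrates.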

The second scenario involves webshells on Exchange and SharePoint servers. When Defender identifies a system running the IIS role (particularly one hosting Exchange or SharePoint) it applies targeted inspection to web-accessible directories. In one observed case, a threat actor with elevated access inside the network dropped a customised webshell into the Exchange Web Services directory. It had file upload, download, and in-memory code execution capabilities. Because the device was classified as an internet-facing Exchange server, the risk profile was elevated and the file was remediated immediately on creation.
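The shape of that targeted inspection can be sketched as a scan scoped to web-accessible directories, since a dropped webshell has to appear there as a script file the web server will execute. The extension list below is illustrative only; real inspection examines file content and behaviour, not just names, and the directories would come from the server's actual IIS configuration.

```python
from pathlib import Path

# Illustrative server-side script extensions a webshell would use.
SCRIPT_EXTS = {".aspx", ".ashx", ".asmx"}

def web_script_files(root):
    """List script files under a web-accessible directory tree, the kind
    of files a newly dropped webshell would appear as."""
    root = Path(root)
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in SCRIPT_EXTS)
```

On a server classified as internet-facing Exchange, any new file matching this shape in a directory like Exchange Web Services warrants immediate scrutiny, which is why the remediation in the observed case happened on creation.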

The third involves remote credential dumping from identity infrastructure. This covers scenarios where attackers attempt to access Active Directory database files, registry hives, or identity synchronisation data remotely. The context of the target system (domain controller, Entra Connect server) triggers stronger prevention than the same activity on a non-critical host.

Why this approach is the right direction

The underlying logic here is sound, and it addresses something that has been a genuine weakness in endpoint detection for years. Flat detection models produce noise on critical systems and miss things on less-watched ones. Context-aware detection, where the asset role and exposure graph inform the response, is a more honest way to think about risk.

The attack path analysis capability in Microsoft Defender is particularly interesting in this context. It maps how identities, secrets, misconfigurations, and resources are interconnected, which means security teams can query the environment to understand not just what is at risk but how an attacker could move between assets. If a CI/CD secret has a path to a domain controller through a series of overprivileged roles, that is visible before an attacker finds it.
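Under the hood, that kind of query is a path search over a graph of "controlling A lets you reach B" edges. The toy graph below (a CI/CD secret granting a role that is overprivileged on a server that is local admin on a domain controller) is invented; the product derives the real graph from inventory.

```python
from collections import deque

# Invented edge list: each edge means "controlling A lets you reach B".
EDGES = {
    "cicd-secret":     ["automation-role"],
    "automation-role": ["app-server"],
    "app-server":      ["domain-controller"],  # overprivileged local admin
}

def attack_path(start, target):
    """Breadth-first search for the shortest path from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path("cicd-secret", "domain-controller"))
# ['cicd-secret', 'automation-role', 'app-server', 'domain-controller']
```

The value is in the direction of the query: you ask what can reach the domain controller and fix the overprivileged edge before an attacker walks it.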

That said, the framework depends on accurate asset classification. The research post is refreshingly direct about this: the automatic identification is good but not perfect, and organisations need to review their environments to confirm that all truly high-value assets are tagged. A server running a privileged service that does not fit the standard HVA categories might not be automatically identified. Gaps in classification lead to gaps in protection.

What you should actually do

Three things are worth acting on regardless of your current Defender configuration.

Review your HVA classification. Log in to Security Exposure Management and look at what has been automatically tagged. Check for obvious gaps (privileged service accounts, non-standard certificate authorities, legacy systems with broad network access that predate modern classification logic).

Prioritise HVA alerts. This sounds obvious but is often not operationalised. An alert from a domain controller should have a shorter SLA for investigation and response than an alert from a developer workstation. If your SOC triage process does not reflect this, it needs to.
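Operationalising that can be as simple as an SLA lookup keyed on asset class. The durations below are illustrative, not a recommendation from the research post; the one design choice worth copying is that unknown asset classes fall through to the strictest SLA, so a classification gap fails safe rather than quiet.

```python
from datetime import timedelta

# Hypothetical investigation SLAs by asset class; durations are illustrative.
TRIAGE_SLA = {
    "tier0":       timedelta(minutes=15),
    "server":      timedelta(hours=4),
    "workstation": timedelta(hours=24),
}

def sla_for(asset_class: str) -> timedelta:
    # Unknown or unclassified assets get the strictest SLA:
    # misclassification should fail safe, not quiet.
    return TRIAGE_SLA.get(asset_class, TRIAGE_SLA["tier0"])

print(sla_for("tier0"))         # 0:15:00
print(sla_for("unclassified"))  # 0:15:00
```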

Triage vulnerabilities with asset context. A moderate vulnerability on a domain controller is more urgent than a high-severity finding on a non-critical endpoint. Most vulnerability management tools will sort by CVSS score by default. That default is wrong for environments with well-defined HVAs.
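Fixing that default is a one-line re-ranking: weight CVSS by asset criticality instead of sorting on raw score. The weights and findings below are invented for illustration.

```python
# Hypothetical asset criticality multipliers.
ASSET_WEIGHT = {"tier0": 3.0, "server": 1.5, "workstation": 1.0}

# Invented findings: a high-severity CVE on a workstation versus a
# moderate one on a domain controller.
findings = [
    {"cve": "CVE-A", "cvss": 8.1, "asset_class": "workstation"},
    {"cve": "CVE-B", "cvss": 5.9, "asset_class": "tier0"},
]

def contextual_risk(finding):
    """CVSS weighted by the criticality of the asset it sits on."""
    return finding["cvss"] * ASSET_WEIGHT[finding["asset_class"]]

for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f["cve"], round(contextual_risk(f), 1))
# CVE-B 17.7  <- moderate CVSS on a domain controller outranks
# CVE-A 8.1      a high-severity finding on a workstation
```

A raw CVSS sort would put CVE-A first; the weighted sort surfaces the domain controller finding, which is the ordering the paragraph above argues for.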

The capability described here is already available to Microsoft Defender XDR customers. The question is whether your organisation has done the classification work to make it effective.

Source article

Want more cloud insights? Listen to Cloudy with a Chance of Insights podcast:

Spotify | YouTube | Apple Podcasts


Richard Hogan - Cloud Solutions Architect

Global Chief Architect | Microsoft Practice | IBM

Richard is a Cloud Solutions Architect with 20+ years of experience in enterprise technology. He specializes in Microsoft Azure, cloud migration strategies, infrastructure automation, and enterprise architecture. Richard is the founder of The Microsoft Cloud Blog and co-host of the Cloudy with a Chance of Insights podcast. Regular speaker at tech conferences and active contributor to the Microsoft Tech Community.
