Defender Exclusions: The Mistake That Turns Malware Invisible

There are two kinds of Windows estates: the ones that have Defender exclusions, and the ones that don’t know they have Defender exclusions.

If you’ve ever chased a “mysterious” persistence that survived reboots, updates, and heartfelt conversations with the helpdesk, there’s a decent chance you were fighting something you told Defender not to see.

What exclusions really do (and why they’re seductive)

Defender “exclusions” sound like a minor tuning knob. They are not. They are a security boundary change. When you exclude a path, process, or extension, you’re telling a kernel-integrated AV engine to stand down in specific places—often the places malware prefers to live because they’re busy, noisy, and full of trusted binaries.

Most exclusions don’t come from malice. They come from performance panic. A build server starts thrashing. A VDI pool suddenly feels like it’s running on pocket calculators. Someone runs a benchmark, sees the antivirus filter driver in the stack, and the organization’s survival instinct kicks in: “Exclude the hot folder.” The hot folder becomes a hot zone. Then it becomes the blind spot you later explain to a board.

Exclusions are not all equal

Defender supports multiple exclusion types. Each has different blast radius and different abuse potential:

  • Path exclusions: “Don’t scan anything under this folder.” If that folder is writable by non-admins, you just built a malware daycare.
  • Process exclusions: “Don’t scan files opened by this process.” This is a favorite for developers (compilers, build tools) and attackers (living-off-the-land via trusted processes).
  • Extension exclusions: “Don’t scan files with this extension.” This is how you accidentally create a safe harbor for renamed payloads.
  • Network/UNC exclusions: often applied to reduce file-server load; often misunderstood; frequently inconsistent across endpoints.
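The exclusion types above can be made concrete with a toy model. This Python sketch (purely illustrative, not a real Defender API; every path and name in it is hypothetical) shows how each type independently suppresses an on-access scan decision:

```python
# Toy model of on-access scan suppression by exclusion type.
# All paths, process names, and extensions are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Exclusions:
    paths: list = field(default_factory=list)        # folder prefixes
    processes: list = field(default_factory=list)    # image names
    extensions: list = field(default_factory=list)   # like ".tmp"

def would_scan(file_path, accessing_process, ex):
    """Return False when any exclusion type suppresses the scan."""
    p = file_path.lower()
    if any(p.startswith(prefix.lower()) for prefix in ex.paths):
        return False   # path exclusion: the location is blind
    if accessing_process.lower() in (n.lower() for n in ex.processes):
        return False   # process exclusion: the actor is trusted
    if any(p.endswith(s.lower()) for s in ex.extensions):
        return False   # extension exclusion: the name is trusted
    return True

ex = Exclusions(paths=["C:\\Build\\"], processes=["msbuild.exe"],
                extensions=[".tmp"])
print(would_scan("C:\\Build\\payload.exe", "explorer.exe", ex))    # False
print(would_scan("C:\\Users\\jdoe\\evil.dll", "msbuild.exe", ex))  # False
print(would_scan("C:\\Users\\jdoe\\evil.exe", "explorer.exe", ex)) # True
```

Note that any single match is enough to suppress the scan, which is why exclusions compound: three "reasonable" exclusions can jointly blind you to far more than any one of them suggests.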

Why people keep doing it anyway

Because it works. Exclusions can reduce CPU, IO, and latency spikes. They can smooth developer builds. They can eliminate “why is Outlook freezing?” tickets. They can even be necessary for some legacy apps.

But exclusions are a trade: you’re paying for performance with detection coverage. If you don’t quantify that trade, and if you don’t bound it with controls, you’re not optimizing. You’re turning off headlights to make the car go faster.

One quote that belongs on every AV exclusion ticket: “Hope is not a strategy.” — Gene Kranz

That’s the whole problem in one sentence. Most exclusion decisions are hope-based. “Hope nothing bad lands there.” “Hope only the build system writes to that folder.” “Hope attackers won’t notice.” Attackers notice.

How attackers weaponize exclusions

Attackers don’t need to disable Defender globally. They just need a place to breathe. Exclusions give them a place to breathe, and they tend to be stable across reboots and updates because they’re policy, not a transient runtime state.

Attack pattern 1: “Live in the excluded path”

If C:\Build\ or D:\Tools\ is excluded and writable, the attacker drops payloads there and runs them from there. The on-access scanning that would have blocked execution never happens, and because quick and full scans honor the same exclusion list, later on-demand scans come back clean too. The malware isn’t evading the scanner; it’s living where the scanner was told not to look.
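A cheap countermeasure during response is to sweep excluded folders yourself and judge files by content, not name: a renamed Windows executable still starts with the "MZ" magic bytes. A Python sketch (point it at paths from your own exclusion inventory; the demo folder and filenames are made up):

```python
# IR sweep sketch: flag files by PE "MZ" magic rather than extension.
import os
import tempfile

def find_pe_files(root):
    """Return files under root whose first two bytes are b'MZ'."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            try:
                with open(full, "rb") as fh:
                    if fh.read(2) == b"MZ":
                        hits.append(full)
            except OSError:
                continue  # locked or unreadable: note it, move on
    return hits

# Demo against a throwaway folder with a payload hiding behind .log:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "innocent.log"), "wb") as f:
        f.write(b"MZ\x90\x00")   # PE header bytes behind a "safe" name
    with open(os.path.join(d, "notes.txt"), "wb") as f:
        f.write(b"just text")
    print([os.path.basename(p) for p in find_pe_files(d)])
```

This catches exactly the case extension exclusions invite: a payload whose only disguise is its filename.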

Attack pattern 2: “Abuse a process exclusion”

Process exclusions are sneakier. If you exclude devenv.exe, msbuild.exe, node.exe, or some in-house agent that reads and writes tons of files, you have effectively told Defender: “If that process touches a file, don’t look too hard.”

An attacker who can run code under an excluded process (or inject into it, or coax it into loading malicious content) gets a detection downgrade. It’s not always a full bypass, but it’s enough to slip by noisy, file-based detections.

Attack pattern 3: “Win the governance war”

In large environments, exclusions are managed through Group Policy, Intune, SCCM/ConfigMgr, or third-party EDR policy layers. Attackers love complexity. If they gain admin on a single machine, they may not be able to change centrally enforced settings—unless your change control is weak and you allow local overrides, or your policies are “tattooing” old settings that never get cleaned up.

Even without changing policy, they can take advantage of existing exclusions that nobody remembers. The easiest backdoor is the one you already built and forgot about.

Short joke #1: Exclusions are like “temporary” firewall rules—everyone remembers adding them, nobody remembers removing them.

Attack pattern 4: “Hide in plain storage”

Here’s the storage-engineer angle: people exclude high-churn folders because they cause IO amplification. Defender scans open/close operations; your storage sees it as extra reads, metadata churn, and cache misses. On remote shares it becomes a latency multiplier. So teams exclude entire volumes, profiles, or data roots, often on the machines that have the best credentials (admins, build agents, deployment boxes). That’s not just risky. That’s a gift basket.

Facts and history you can use in arguments

Some context points that help when you’re trying to convince a busy organization to stop treating exclusions like free candy:

  1. Exclusions predate modern EDR thinking. Traditional AV engines were file-centric; exclusions were a blunt tool to keep systems usable.
  2. Performance pain drove early enterprise AV “best practices.” Back when spinning disks and single-digit GB RAM were common, excluding large trees was routine.
  3. Microsoft Defender has evolved into a platform. What people call “Defender” now includes cloud protection, behavior monitoring, ASR rules, and tamper protection—meaning exclusions interact with multiple layers.
  4. Attackers track enterprise conventions. Build folders, temp directories, and tool caches are common across companies; once you exclude them, you’ve standardized the hiding place.
  5. Extension exclusions are historically abused. Renaming a payload to a “safe” extension (or wrapping it in a container) is an old trick that still works surprisingly often.
  6. Policy drift is real. Environments accumulate exclusions from migrations, vendor installs, and “just for now” fixes, and they persist through refresh cycles.
  7. UNC/share scanning has always been contentious. Do you scan on the client? The server? Both? The wrong answer is “neither,” which is what broad exclusions often create.
  8. Defender’s visibility is log-driven. If you’re not collecting Defender operational logs (and correlating with policy), you’re basically guessing.
  9. Exclusions can undermine incident response. If you collect artifacts from excluded paths, your live response tools may see files that Defender ignored at execution time—leading to “why didn’t it alert?” arguments.

Fast diagnosis playbook

When something feels off—missed detections, suspicious persistence, or a machine “too clean” during an incident—don’t start by reinstalling the agent. Start by answering three questions in order.

First: Are exclusions present, and where did they come from?

  • Check local Defender exclusion lists.
  • Check whether settings are managed by policy (GPO/MDM).
  • Check tamper protection state.

Second: Are the excluded locations writable by the attacker’s likely privilege?

  • For endpoints: can a standard user write there?
  • For servers: can the service account or build agent write there?
  • For shared volumes: who has Modify on the share and NTFS?

Third: Did Defender log anything relevant, or is logging/telemetry missing?

  • Look at Defender operational logs around the event time.
  • Confirm the device is reporting to your central security console.
  • Check if your SIEM is ingesting the right channels.

If you do those three steps, you usually find the bottleneck fast: either (a) exclusions created a blind spot, (b) policy is misapplied/drifted, or (c) you’re flying without telemetry.

Practical tasks: commands, outputs, and decisions

These are designed for responders and platform owners. Run them on a representative endpoint, then on a “known bad” machine if you have one. Every task includes the command, example output, what it means, and the decision you make.

Task 1: List all Defender exclusions (paths, processes, extensions)

cr0x@server:~$ powershell -NoProfile -Command "$p = Get-MpPreference; $p.ExclusionPath; $p.ExclusionProcess; $p.ExclusionExtension"
C:\Build\
C:\Tools\Cache\
node.exe
msbuild.exe
.tmp
.log

Meaning: Defender will ignore those locations/process interactions/extensions.

Decision: If any excluded path is writable by non-admins or service accounts that process untrusted inputs, treat it as a priority risk and plan a rollback or narrowing.

Task 2: Check whether Defender settings are managed (and by what)

cr0x@server:~$ powershell -NoProfile -Command "Get-MpComputerStatus | Select-Object AMRunningMode,AntivirusEnabled,RealTimeProtectionEnabled,IsTamperProtected | Format-List"
AMRunningMode             : Normal
AntivirusEnabled          : True
RealTimeProtectionEnabled : True
IsTamperProtected         : True

Meaning: Defender is on, RTP is on, and tamper protection is enabled.

Decision: If tamper protection is off on endpoints, fix that before you argue about exclusions. Tamper protection won’t solve everything, but it raises the cost of local meddling.

Task 3: Identify broad exclusions quickly (the “entire drive” smell test)

cr0x@server:~$ powershell -NoProfile -Command "(Get-MpPreference).ExclusionPath | Where-Object { $_ -match '^[A-Z]:\\?$' }"
C:\
D:\

Meaning: Somebody excluded whole volumes. That’s not tuning; that’s surrender.

Decision: Open an incident-level ticket. This is urgent unless the host is isolated and compensated by other controls.
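If you’d rather run the same smell test over an exported inventory (say, a CSV of exclusions from many hosts), here’s a rough breadth heuristic in Python. The categories and depth threshold are arbitrary triage values, not an official classification:

```python
# Triage heuristic: classify a Windows path exclusion by breadth.
import re

def breadth(path):
    """Whole-drive roots are incidents; shallow paths deserve review."""
    if re.fullmatch(r"[A-Za-z]:\\?", path):
        return "whole-drive"
    depth = path.rstrip("\\").count("\\")
    return "shallow" if depth <= 1 else "scoped"

for p in ("C:\\", "C:\\Build\\", "C:\\Tools\\Cache\\"):
    print(p, "->", breadth(p))
# C:\ -> whole-drive
# C:\Build\ -> shallow
# C:\Tools\Cache\ -> scoped
```

Sort the output and start with the whole-drive entries; they are the surrender cases, not the tuning cases.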

Task 4: Check ASR rule state (and whether exclusions are masking risky behavior)

cr0x@server:~$ powershell -NoProfile -Command "$p = Get-MpPreference; $p.AttackSurfaceReductionRules_Ids; $p.AttackSurfaceReductionRules_Actions"
BE9BA2D9-53EA-4CDC-84E5-9B1EEEE46550
D4F940AB-401B-4EFC-AADC-AD5F3C50688A
1
2

Meaning: ASR rules are configured; each action value pairs positionally with a rule ID (0 = disabled, 1 = block, 2 = audit, 6 = warn).

Decision: If ASR is mostly “audit” while exclusions are broad, you’re collecting evidence of compromise rather than preventing it. Move high-value devices to block where feasible.

Task 5: Pull recent Defender detections (local) to see what it is and isn’t seeing

cr0x@server:~$ powershell -NoProfile -Command "Get-MpThreatDetection | Select-Object -First 5 | Format-List"
ThreatID             : 2147724016
ThreatName           : Trojan:Win32/Wacatac.B!ml
ActionSuccess        : True
InitialDetectionTime : 2/5/2026 8:41:12 AM
Resources            : {file:_C:\Users\jdoe\AppData\Local\Temp\q1.tmp}

Meaning: Defender is detecting something in a temp location; good. Now ask if similar artifacts exist under excluded paths.

Decision: If detections are scarce during an active incident, suspect exclusions, telemetry gaps, or a different execution chain (scripts, memory-only, signed abuse).

Task 6: Query Defender Operational event log for exclusion-related clues

cr0x@server:~$ powershell -NoProfile -Command "Get-WinEvent -LogName 'Microsoft-Windows-Windows Defender/Operational' -MaxEvents 20 | Select-Object TimeCreated,Id,Message | Format-Table -AutoSize"
TimeCreated           Id Message
-----------           -- -------
2/5/2026 8:40:58 AM 1116 Microsoft Defender Antivirus detected malware or other potentially unwanted software.
2/5/2026 8:40:59 AM 1117 Microsoft Defender Antivirus took action to protect this machine from malware or other potentially unwanted software.

Meaning: You have basic detection telemetry locally.

Decision: If this log is empty on a machine you know executed suspicious code, check whether logging is disabled, Defender is replaced, or exclusions prevented scanning events from being generated.

Task 7: Validate whether a suspicious folder is excluded

cr0x@server:~$ powershell -NoProfile -Command "$p='C:\Build\'; (Get-MpPreference).ExclusionPath -contains $p"
True

Meaning: The folder is excluded.

Decision: If that folder is used to ingest artifacts from outside (packages, PR builds, downloaded tools), treat it as hostile. Remove or narrow the exclusion and compensate with performance-friendly scanning settings.

Task 8: Check write permissions on an excluded path (is it a drop zone?)

cr0x@server:~$ powershell -NoProfile -Command "icacls C:\Build\"
C:\Build\ BUILTIN\Administrators:(OI)(CI)(F)
         NT AUTHORITY\SYSTEM:(OI)(CI)(F)
         BUILTIN\Users:(OI)(CI)(M)
         CREATOR OWNER:(OI)(CI)(IO)(F)

Successfully processed 1 files; Failed processing 0 files

Meaning: BUILTIN\Users have Modify. On an excluded path, that’s a problem.

Decision: Either remove the exclusion or lock the ACL down so only the necessary service accounts can write. Prefer both when possible.
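When you audit many hosts, parsing icacls output beats eyeballing it. This Python sketch flags broad principals that hold write-capable simple rights; the rights letters follow icacls conventions (F = full, M = modify, W = write), and the group list is a starting point, not exhaustive:

```python
# Flag broad groups with write-capable rights in icacls-style output.
BROAD_GROUPS = (
    "Everyone",
    "BUILTIN\\Users",
    "NT AUTHORITY\\Authenticated Users",
)

def broad_writers(icacls_text):
    """Return broad groups holding F, M, or W on the queried path."""
    flagged = []
    for line in icacls_text.splitlines():
        line = line.strip()
        for group in BROAD_GROUPS:
            if line.startswith(group + ":") and any(
                f"({r})" in line for r in ("F", "M", "W")
            ):
                flagged.append(group)
    return flagged

# Sample mirroring the Task 8 output above:
sample = """C:\\Build\\ BUILTIN\\Administrators:(OI)(CI)(F)
         NT AUTHORITY\\SYSTEM:(OI)(CI)(F)
         BUILTIN\\Users:(OI)(CI)(M)
         CREATOR OWNER:(OI)(CI)(IO)(F)"""
print(broad_writers(sample))  # ['BUILTIN\\Users']
```

Run it over `icacls` output captured from every excluded path in your inventory; any non-empty result is a drop zone candidate.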

Task 9: Check for “policy drift” via registry (local policy vs effective)

cr0x@server:~$ powershell -NoProfile -Command "Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender\Exclusions\Paths' -ErrorAction SilentlyContinue | Format-List"
C:\Build\ : 0
C:\Tools\Cache\ : 0
PSPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\Exclusions\Paths

Meaning: These exclusions are likely coming from policy (GPO/MDM), not a one-off local tweak.

Decision: Fix at the source of truth. Local removal will reappear after policy refresh and you’ll look like you “did nothing.”

Task 10: Determine if Defender is in passive mode (common in EDR coexistence)

cr0x@server:~$ powershell -NoProfile -Command "Get-MpComputerStatus | Select-Object AMRunningMode"
AMRunningMode
-------------
Normal

Meaning: Defender is actively protecting (not passive).

Decision: If AMRunningMode reports Passive Mode or EDR Block Mode, map responsibilities explicitly: which product enforces scanning, and where do its exclusions live? Split-brain security is how you get “covered” by nobody.

Task 11: Capture real-time protection settings that interact with performance tuning

cr0x@server:~$ powershell -NoProfile -Command "Get-MpPreference | Select-Object DisableRealtimeMonitoring,DisableIOAVProtection,DisableArchiveScanning,DisableBehaviorMonitoring | Format-List"
DisableRealtimeMonitoring : False
DisableIOAVProtection     : False
DisableArchiveScanning    : False
DisableBehaviorMonitoring : False

Meaning: Key protections are enabled; performance issues likely drove exclusions rather than global disabling.

Decision: If any of these are disabled broadly, treat as a configuration defect. Re-enable and address performance with targeted exclusions, CPU throttling, or scheduled scans—not by removing core features.

Task 12: Test a controlled scan against a specific folder (to measure impact before changing policy)

cr0x@server:~$ powershell -NoProfile -Command "Measure-Command { Start-MpScan -ScanType CustomScan -ScanPath 'C:\Build\' } | Select-Object TotalSeconds"
TotalSeconds
------------
38.421

Meaning: A custom scan of that folder takes ~38 seconds on this host, now. That’s data.

Decision: Use this to negotiate: maybe you don’t need a full exclusion; maybe you need to exclude only transient subfolders, or only during work hours, or move build artifacts to a controlled location with separate scanning.

Task 13: Verify whether cloud-delivered protection is on (helps catch commodity threats)

cr0x@server:~$ powershell -NoProfile -Command "Get-MpPreference | Select-Object MAPSReporting,SubmitSamplesConsent | Format-List"
MAPSReporting        : Advanced
SubmitSamplesConsent : SendSafeSamples

Meaning: Cloud protection and sample submission are enabled to a reasonable level.

Decision: If this is disabled enterprise-wide “for privacy,” you’re choosing slower, more fragile detection. You’ll compensate with more exclusions (because you’ll get more false positives) and the loop gets worse.

Task 14: Find recently changed Defender settings (rough signal via event logs)

cr0x@server:~$ powershell -NoProfile -Command "Get-WinEvent -LogName 'Microsoft-Windows-Windows Defender/Operational' -MaxEvents 200 | Where-Object { $_.Id -eq 5007 } | Select-Object -First 5 TimeCreated,Message | Format-List"
TimeCreated : 2/5/2026 7:58:21 AM
Message     : Microsoft Defender Antivirus configuration has changed. If this is an unexpected event you should review the settings...

Meaning: Event ID 5007 indicates configuration change. It won’t always tell you exactly what, but it tells you when.

Decision: Correlate the timestamp with change windows, GPO updates, Intune policy pushes, or suspicious admin activity. If it’s outside change control, escalate.

Short joke #2: The fastest way to speed up Defender is to exclude C:\. Also the fastest way to speed up incident response… because it never ends.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption

The company had a “standard hardening baseline.” It looked good in slides: Defender on, tamper protection on, ASR rules enabled (mostly audit), and “carefully curated” exclusions for developer productivity. Everyone believed the key phrase: carefully curated.

During an investigation into a suspicious admin login, responders found a new scheduled task launching a binary from C:\DevTools\. No alerts. No blocks. The file hash wasn’t in any known-bad set. The machine was otherwise clean. The team’s first assumption was classic: “It must be memory-only malware.” That assumption burned time.

The actual story was more boring. C:\DevTools\ was excluded years ago for a now-retired IDE that performed poorly under on-access scanning. The folder stuck around in the baseline because nobody wanted to be the person who broke dev environments. Over time, the ACL drifted. A standard user group had Modify. The attacker didn’t need a clever bypass. They just needed a writable, excluded directory and a scheduled task.

What hurt wasn’t the exclusion itself. It was the assumption that exclusions were static, safe, and reviewed. The investigation ended with a policy cleanup, ACL lockdown, and an uncomfortable realization: the baseline had never been tested against an attacker model. It was tested against a helpdesk ticket model.

Mini-story 2: The optimization that backfired

A platform team ran a fleet of Windows build agents. Builds were slow after an upgrade, and the metrics showed file IO spikes. The build workspace contained thousands of small files—dependency caches, extracted archives, intermediate objects. Defender was doing what it’s paid to do: inspecting file operations.

The team made a “surgical” change: exclude the entire workspace root. Build time improved immediately. Everyone cheered. The change was rolled out broadly, because nothing succeeds like a graph going down and to the right.

Then came the supply-chain incident. A compromised dependency made it into the build process. It didn’t need to exploit the OS. It didn’t need kernel tricks. It just needed to drop a helper executable into the excluded workspace and execute it during the build. The payload wasn’t even particularly novel. It was simply unobserved where it mattered.

When they rolled back the exclusion, builds slowed again—and now the team had two fires: security and performance. The “optimization” wasn’t wrong to pursue. The mistake was the size of the blast radius and the absence of compensating controls: no isolated build networks, no write restrictions, no scanning at artifact ingress/egress, no separate staging volumes with controlled access.

The fix wasn’t “never exclude.” The fix was “exclude like you mean it”: narrow paths, separate trust boundaries, scan before execution, and design build agents as semi-disposable. Performance came back through better caching strategy and smarter scanning windows, not through blindness.

Mini-story 3: The boring practice that saved the day

A different org had a policy nobody loved: every exclusion required a ticket with (1) a measured performance impact, (2) an owner, (3) an expiry date, and (4) an ACL review of the excluded location. People complained. Of course they did. It was extra work and it slowed “quick fixes.”

One afternoon, an analyst saw a weird pattern: a machine running a legitimate signed tool was repeatedly touching files under an excluded folder used by a line-of-business app. No Defender hits. But the SIEM had file creation events and process execution logs (from other telemetry) that didn’t smell right.

The key detail: the exclusion ticket for that folder had an expiry. It was due for renewal, and renewal required re-justifying the folder’s write permissions. When they checked, they found the app vendor had changed its installer behavior; it widened ACLs during an update. The exclusion had silently become dangerous.

Because the process was boring, it was reliable. They removed the exclusion, replaced it with a narrower subfolder exclusion, and locked down permissions. The suspicious activity turned out to be an early-stage intrusion attempt using commodity tooling. It failed in this environment for a simple reason: someone treated “exclude” as a change with consequences, not as a performance hack.

Common mistakes: symptom → root cause → fix

1) Symptom: “Defender is enabled, but it never alerts on this host”

Root cause: The malware is executing from excluded paths or through excluded processes; Defender is working as configured.

Fix: Audit exclusions; remove broad paths; verify ACLs; add detection telemetry for process creation and script activity; re-scan previously excluded locations with a controlled custom scan.

2) Symptom: “We removed the exclusion, but it keeps coming back”

Root cause: Managed policy (GPO/Intune/ConfigMgr) is reapplying it, or legacy registry “tattoos” remain.

Fix: Identify source of truth (policy paths in registry and device management); change centrally; confirm policy refresh; document ownership and expiry.

3) Symptom: “Only developers are affected; everyone else is fine”

Root cause: Dev tools or build caches got process exclusions (e.g., node.exe, msbuild.exe) or large path exclusions on dev machines.

Fix: Replace process exclusions with path exclusions on non-writable-by-user directories, or isolate caches per-user with scanning at download time; enforce least privilege on tool directories.

4) Symptom: “VDI is slow, so we excluded profiles”

Root cause: Excluding C:\Users\ or profile subtrees to reduce logon storms and IO.

Fix: Don’t exclude whole profiles. Tune scanning schedules, enable performance settings, and target exclusions to known heavy-but-low-risk caches with strict ACLs. Consider profile container solutions with scanning at the container boundary.

5) Symptom: “File server CPU is high, so we excluded shares on clients”

Root cause: Confusion about where scanning should occur; client-side exclusions reduce load but create blind spots for files accessed over SMB.

Fix: Decide scanning responsibility explicitly: scan at ingress (server) and/or at egress (client) depending on risk. If you exclude on clients, ensure server-side scanning and strong controls on write permissions and file screening.

6) Symptom: “We excluded .log and .tmp to reduce noise”

Root cause: Over-broad extension exclusions are easy to abuse; payloads can be stored with misleading extensions.

Fix: Avoid extension exclusions except for tightly controlled, non-executable data types, and even then prefer path-based exclusions with locked ACLs. Reassess any exclusion that matches common “generic” extensions.

7) Symptom: “After installing a vendor app, Defender stopped flagging its folder”

Root cause: Vendor installer added exclusions (sometimes legitimately, sometimes lazily) and they persisted through upgrades.

Fix: Inventory post-install changes (event ID 5007); require vendor justification; narrow exclusions; enforce that vendors don’t exclude writable plugin directories.

8) Symptom: “Incident response scan finds malware, but it wasn’t blocked earlier”

Root cause: Execution happened from excluded area or via excluded process; later scans used different engines/modes, or scanning only happened after IOC was known.

Fix: Treat exclusions as a primary suspect. Align preventive controls (on-access scanning) with detective controls (IR scanning) so you’re not comparing apples to a fruit salad.

Checklists / step-by-step plan

Step-by-step: How to fix exclusions without breaking production

  1. Inventory effective exclusions across representative endpoints (dev, VDI, servers, kiosks). Categorize by type: path/process/extension.
  2. Classify each exclusion by risk:
    • Is the excluded location writable by standard users or broad groups?
    • Does it receive untrusted content (downloads, email attachments, build dependencies, USB imports)?
    • Does it run executables from there?
  3. Find the owner (team or app). If there’s no owner, it’s already a problem.
  4. Add an expiry date for every exclusion. Treat “permanent” as a smell.
  5. Narrow the scope:
    • Prefer excluding a specific cache subfolder over an entire workspace.
    • Prefer excluding a non-executable data directory over a tools directory.
    • Avoid process exclusions unless you can prove they don’t create execution blind spots.
  6. Lock the ACLs on excluded paths. If it’s excluded, it should not be generally writable. If it must be writable, isolate it and scan at the boundary.
  7. Measure performance before and after using repeatable tests (build time, IO metrics, controlled custom scans). No benchmarks, no exclusions.
  8. Roll out in rings: pilot group, then a broader cohort, then full production. Exclusions are a risk multiplier; don’t do big-bang changes.
  9. Monitor signals:
    • Defender operational events (config change, detection events).
    • CPU and disk queue length on endpoints and file servers.
    • Helpdesk tickets tagged to performance and application compatibility.
  10. Document compensating controls when exclusions remain: restricted write access, application allowlisting, scanning on servers, artifact scanning in CI/CD.
  11. Re-audit quarterly or after major upgrades. Drift happens quietly; you need a scheduled way to notice.
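Steps 3, 4, and 11 are easy to encode. Here is a minimal inventory record in Python; the field names and the sample entry are suggestions, not an existing schema:

```python
# Minimal exclusion inventory record: owner, justification, expiry.
from dataclasses import dataclass
from datetime import date

@dataclass
class ExclusionRecord:
    kind: str            # "path" | "process" | "extension"
    value: str
    owner: str           # team accountable for the exclusion
    justification: str   # link to the measured performance impact
    expires: date

    def is_expired(self, today):
        return today >= self.expires

records = [
    ExclusionRecord("path", "C:\\Build\\cache\\", "build-platform",
                    "TICKET-1234: custom scan 38s -> 9s",
                    date(2026, 6, 1)),
]
overdue = [r.value for r in records if r.is_expired(date(2026, 7, 1))]
print(overdue)  # ['C:\\Build\\cache\\']
```

The quarterly re-audit then becomes a query over this list instead of an archaeology project.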

Checklist: What an acceptable exclusion request looks like

  • Exact path/process/extension and justification tied to measurable impact.
  • Proof of write permissions and which principals can write.
  • Whether the location can contain executables or scripts.
  • Compensating controls (e.g., scan at download, isolate build agents, artifact signing).
  • Owner and expiry date.
  • Rollback plan and ringed deployment plan.

Checklist: Red flags (treat as security incident, not a tuning request)

  • Excluding C:\, D:\, user profile roots, or whole application roots with plugins/macros.
  • Excluding common tooling processes (powershell.exe, wscript.exe, cmd.exe) or scripting hosts. That’s basically a written invitation.
  • Exclusions with no owner or “we don’t know why, but it’s been there forever.”
  • Exclusions on machines with high privilege (deployment servers, admin jump hosts) without compensating controls.
  • Extension exclusions for types that can carry code in practice (or are easy to rename into).

FAQ

1) Are Defender exclusions always bad?

No. Some are necessary. The problem is unbounded exclusions: too broad, too writable, no owner, no expiry, no compensating controls. Use exclusions like a scalpel, not a shovel.

2) What’s worse: a path exclusion or a process exclusion?

Process exclusions are often worse because they’re harder to reason about. A path exclusion can be contained with ACLs. A process exclusion can create weird “anything this trusted process touches” blind spots, which attackers can sometimes ride.

3) Can attackers add exclusions themselves?

If they have sufficient privileges and tamper protection/policy enforcement isn’t stopping them, yes. But the more common case is simpler: they abuse exclusions you already deployed for performance.

4) If tamper protection is on, are we safe?

Safer, not safe. Tamper protection helps prevent local changes, but it doesn’t fix risky exclusions delivered by policy, and it doesn’t change the fact that an excluded writable folder is a hiding place.

5) We exclude build caches. How do we do it safely?

Make the excluded cache location non-writable by humans, only by the build agent identity. Scan dependencies at download time, scan artifacts at publish time, and keep build agents isolated and rebuildable. Narrow the exclusion to the cache, not the workspace.

6) Should we exclude database files for performance?

Sometimes, but don’t guess. Many database vendors have specific guidance because scanning live database files can cause latency and corruption risk. If you exclude DB data files, compensate by scanning binaries, scripts, backups, and staging directories—where attackers actually drop payloads.

7) Why do exclusions “come back” after we remove them?

Because your environment is managed. GPO/MDM reapply settings, and some legacy policies remain in registry keys until removed centrally. Always fix the policy source, not just the endpoint.

8) How do we detect that an excluded path is being used by malware?

Use other telemetry: process creation logs, file creation auditing, Sysmon (where deployed), EDR sensors, and unusual scheduled tasks/services pointing to excluded directories. Also look for Defender config change events and sudden creation bursts inside excluded paths.
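One concrete cross-check from that list, sketched in Python: flag scheduled tasks whose command lines point into excluded directories. The task names and paths below are made up; feed it your real task export and exclusion inventory:

```python
# Cross-reference scheduled-task command lines against excluded paths.
def tasks_in_excluded_paths(task_cmds, excluded):
    """Return tasks whose command line references an excluded prefix."""
    ex = [p.lower() for p in excluded]
    return {name: cmd for name, cmd in task_cmds.items()
            if any(p in cmd.lower() for p in ex)}

tasks = {
    "Updater":   "C:\\Program Files\\Vendor\\update.exe",
    "NightSync": "C:\\DevTools\\sync.exe --quiet",
}
hits = tasks_in_excluded_paths(tasks, ["C:\\DevTools\\"])
print(sorted(hits))  # ['NightSync']
```

A task launching from an excluded folder is exactly the pattern from mini-story 1: cheap to check, high signal when it fires.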

9) Is excluding file extensions ever acceptable?

Rarely. Extension exclusions are too easy to game. If you must, keep it extremely narrow, ensure the directory is non-executable, and confirm you’re not excluding something that can be executed or interpreted indirectly.

10) What’s a safe default policy stance?

Default to no exclusions. Add them only with measured justification, owner, expiry, and ACL review. For performance, prefer tuning scan schedules and narrowing scope over blanket exclusions.

Conclusion: next steps you can do this week

If you run Windows at scale, you already have exclusions. The question is whether they’re controlled engineering decisions or fossilized panic.

  1. Export effective exclusions from a representative sample of endpoints and servers. Make a list humans can read.
  2. Flag the obvious hazards: whole-drive exclusions, user-writable excluded paths, broad process exclusions.
  3. Find the policy source (local vs managed) and assign an owner for each exclusion group.
  4. Lock down permissions on any excluded path that must remain. If it’s writable by “Users,” it’s not an exclusion, it’s an attack surface.
  5. Put expiry dates on everything and schedule a quarterly review. The boring process is the one that still works when staff changes and memory doesn’t.

The best time to fix exclusions is before the incident. The second-best time is right after you finish reading this and before your next “why didn’t Defender catch it?” meeting.
