If your containers feel like they’re running through wet cement on Windows, you’re not imagining it. The performance gap between “Docker Desktop” and “Docker in WSL” is real—but it’s also misunderstood. Most people benchmark the wrong thing, then “fix” it by making their setup harder to debug.
I’m going to tell you what’s actually faster, why, and how to prove it on your machine in under an hour—without cargo-culting settings you won’t remember next quarter.
What you’re really comparing (and why the names are misleading)
On Windows, Docker “the product” and Docker “the engine” get tangled. People say “Docker Desktop is slow” when they mean “my bind mounts are slow.” Or they say “WSL Docker is faster” when they accidentally moved their source code into a Linux filesystem and stopped paying the Windows filesystem tax.
Let’s define the two setups in the way performance actually cares about:
- Docker Desktop: a Windows app that runs Docker Engine inside a Linux VM. On modern versions, that VM is usually backed by WSL2. Desktop also adds integrations: UI, credential helpers, networking glue, file sharing paths, extensions, and policy knobs.
- Docker Engine inside WSL2 (“Docker in WSL”): you install and run the Linux Docker Engine directly in your WSL distro. No Desktop app required. Containers run in the same WSL2 VM environment, but you manage the daemon like a normal Linux system.
Notice what’s missing: the actual container runtime doesn’t magically change. Both end up in a Linux kernel context (WSL2’s VM). The difference is where your daemon lives, how file sharing is wired, and how many translation layers you accidentally add.
Here’s the fastest way to think about it:
- If your workload is CPU-bound (compiling, compression, crypto), Desktop vs WSL Engine is usually a rounding error.
- If your workload is filesystem-bound (Node.js installs, hot reloaders, language servers, large monorepos), file placement and mount type dominate everything.
- If your workload is network-bound (lots of localhost services, proxies, VPNs), integration details and NAT paths can make one feel “randomly” worse.
One idea worth borrowing: Werner Vogels is often paraphrased as saying that everything fails, all the time, so you design and operate assuming failure. Performance work is similar: assume your fastest path will degrade unless you keep it simple and observable.
Joke #1: Benchmarking Docker on Windows without checking where your code lives is like timing a sports car while dragging the parking brake. You’ll get a number, sure.
A few facts and historical context you can use in arguments
These aren’t trivia-night facts. They explain why the performance profile looks the way it does.
- Docker on Windows initially leaned heavily on Hyper-V to run a Linux VM, because containers need a Linux kernel. Early setups had noticeably higher overhead and brittle networking.
- WSL1 wasn’t a VM; it translated Linux syscalls into Windows behavior. That was clever, but it wasn’t “real Linux,” and many container behaviors were awkward or impossible.
- WSL2 switched to a real Linux kernel in a lightweight VM, which made Linux container workloads much more compatible—and often faster for Linux-native filesystem operations.
- File performance problems often come from crossing the Windows/Linux filesystem boundary (e.g., accessing /mnt/c from Linux). That boundary has to translate metadata, permissions, and notification semantics.
- Bind mounts aren't inherently slow; bind mounts that traverse a virtualization boundary are. A Linux bind mount inside the Linux filesystem is typically fine.
- Docker Desktop has evolved into a platform, not just a daemon launcher. The extra features are useful, but they add moving parts that can impact performance and debugging.
- BuildKit changed Docker build performance characteristics by improving caching, parallelism, and mount-based build steps. But it also made certain filesystem bottlenecks more visible.
- Windows file change notifications differ from Linux in edge cases; hot reload stacks can behave differently depending on whether events are bridged or polled.
Where speed actually comes from: CPU, disk, mounts, and network
CPU: usually not the deciding factor
CPU-bound workloads (Go/Rust builds, gzip, test runners that don’t thrash the disk) tend to perform similarly between Desktop and WSL Engine because both are executing inside the same WSL2-backed Linux environment in many modern setups. The VM boundary exists either way. The question becomes: did you accidentally add extra layers (like running Docker Desktop but also calling it from Windows paths with heavy bind mounts)?
Disk I/O: where most people lose the week
The single biggest speed lever on Windows container dev is where your project files live and how they’re mounted.
- Best-case: code is stored inside the WSL2 Linux filesystem (your distro’s ext4 in a VHDX), and containers use volumes or bind mounts that stay within Linux. Fast metadata, fast small-file operations.
- Worst-case: code is stored on NTFS (e.g., C:\src) and you bind mount it into containers across the boundary (/mnt/c/src or Desktop file sharing). Small-file heavy workloads get punished.
Why? Because a “simple” operation like “stat 30,000 files” turns into a translation party: metadata mapping, permission mapping, case sensitivity weirdness, and cache invalidations. Node’s node_modules is basically a small-file torture test. So are Python virtualenvs and Rust cargo registries.
Bind mounts vs volumes: your hidden performance contract
Volumes live in Docker’s Linux storage backend (overlay2 on ext4 in WSL2). They’re usually fast and predictable. Bind mounts mirror a host path into the container. If that path is on Windows, you pay for boundary crossing. If that path is inside the WSL2 distro filesystem, bind mounts can be fine.
Translated into advice: keep your dependencies inside Linux (volumes) even if your source has to be on Windows. Or better: keep both in Linux and use editor integration to work with WSL paths.
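As a sketch of that advice in command form, a hybrid layout might look like the function below. The repo path, volume name, and image are illustrative assumptions, not something from your setup:

```shell
# Hypothetical team-script helper: source code comes in via bind mount,
# while node_modules lives in a named Docker volume so dependency
# installs never touch NTFS. Paths/names here are illustrative.
run_with_hybrid_mounts() {
  docker volume create myapp_node_modules >/dev/null
  docker run --rm \
    -v "$HOME/src/myapp":/app \
    -v myapp_node_modules:/app/node_modules \
    -w /app \
    node:20-alpine npm ci
}
```

The volume mount at /app/node_modules shadows whatever the bind mount would have put there, which is exactly the point: the hot dependency tree stays on Linux storage.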
Networking: mostly fine until VPNs and localhost enter the chat
WSL2 uses NAT’d virtual networking. Docker adds its own virtual network plumbing. Docker Desktop adds more integration to make localhost behave like people expect on Windows.
Performance issues in networking usually show up as:
- mysterious latency spikes when a corporate VPN is connected
- DNS resolution delays inside containers
- port-forwarding inconsistencies across Windows ↔ WSL ↔ container
Desktop tends to “just work” more often for port publishing to Windows. WSL Engine can be cleaner for Linux-to-Linux service meshes inside WSL, but you might do more manual work to make Windows tools see the services.
Memory and page cache: the quiet reason one setup “feels” faster
Linux page cache is a performance superpower. If your workload repeatedly reads the same dependency trees, caching helps a lot. But WSL2’s memory behavior (dynamic allocation and reclaim) can make performance feel inconsistent if the VM is starved or constantly ballooning. Desktop adds its own resource limits and UI knobs, which can be helpful—or can throttle you if set too low.
Who’s faster for what: a blunt decision matrix
There isn’t one universal “faster.” There is a faster-for-your-bottleneck. Here’s the practical take.
If you want maximum speed for filesystem-heavy dev
Pick Docker Engine inside WSL2, and keep your repos inside the WSL filesystem (\\wsl$ path from Windows; actual storage under the distro VHDX). Use Linux-native tools. Bind mount from Linux paths, not /mnt/c.
If you want the most predictable developer experience in a mixed Windows shop
Pick Docker Desktop, especially if you need GUI management, credential helpers, corporate proxies, Kubernetes toggles, or fewer “why doesn’t localhost work” tickets. But still: keep your project in WSL if you care about speed.
If your bottleneck is builds, not live-reload
Either setup can be fast. What matters is BuildKit, caching strategy, and avoiding pointless invalidation (copying your whole repo early in the Dockerfile, or rebuilding dependencies every edit).
If you’re on a corporate VPN that breaks DNS
Desktop often has better knobs and integration. WSL Engine can be “purer” but you may end up debugging resolv.conf and Windows DNS proxy behavior at 4:55 PM.
Joke #2: Nothing accelerates a “let’s optimize Docker” meeting like someone saying “it works on my laptop.” Then everyone’s laptop becomes evidence.
Practical tasks: commands, expected output, and what decision to make
These are real checks I’d run before believing anyone’s benchmark. Each task includes: command, what the output means, and what you do next.
Task 1: Confirm where Docker is running (Desktop vs WSL Engine)
cr0x@server:~$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
desktop-linux Docker Desktop unix:///var/run/docker.sock
Meaning: Contexts tell you which daemon you’re talking to. If you see desktop-linux, you’re likely using Desktop’s daemon integration.
Decision: If you’re trying to compare, explicitly switch contexts and re-run tests. Don’t “think” you switched; prove it.
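One way to "prove it" is a guard at the top of your benchmark scripts. This is a hypothetical helper, assuming `docker context show` is available (it prints the active context name):

```shell
# Hypothetical guard for benchmark scripts: abort unless the active
# Docker context is the one you intend to measure.
require_context() {
  want="$1"
  have="$(docker context show 2>/dev/null)"
  if [ "$have" != "$want" ]; then
    echo "expected context '$want' but got '$have'" >&2
    return 1
  fi
}
```

Call `require_context desktop-linux` (or `default`) before any timing run, so "I thought I switched" stops being a failure mode.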
Task 2: Identify the daemon and storage driver
cr0x@server:~$ docker info --format '{{.ServerVersion}} {{.Driver}} {{.OperatingSystem}}'
27.3.1 overlay2 Docker Desktop
Meaning: Overlay2 on Linux is typical. If you don’t see overlay2, something unusual is happening. If it’s not Linux, you’re not in the intended path.
Decision: If the storage driver is unexpected, stop. Your performance investigation will be fiction until you fix that.
Task 3: Measure whether your project is on Windows or WSL filesystem
cr0x@server:~$ pwd
/mnt/c/src/myapp
Meaning: /mnt/c is Windows NTFS mounted into WSL. If your project is here and you’re bind mounting it, you’re likely paying the boundary tax.
Decision: If you care about speed, move the repo into Linux (e.g., ~/src/myapp inside WSL) and re-test before changing anything else.
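A minimal migration sketch, with source and destination passed as parameters (the excluded directory names are common examples, not a complete list):

```shell
# Hypothetical migration helper: copy a repo off NTFS into the WSL
# filesystem, then drop dependency trees that should be rebuilt (or
# kept in volumes) on the Linux side.
migrate_repo_to_wsl() {
  src="$1"   # e.g. /mnt/c/src/myapp
  dst="$2"   # e.g. "$HOME/src/myapp"
  mkdir -p "$dst"
  cp -a "$src/." "$dst/"
  rm -rf "$dst/node_modules" "$dst/.venv"
}
```

Re-run your traversal and build timings against the new path before changing anything else, so the move is the only variable.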
Task 4: Compare metadata-heavy ops (the “node_modules test” without installing anything)
cr0x@server:~$ time find . -type f | wc -l
48219
real 0m3.214s
user 0m0.112s
sys 0m0.988s
Meaning: This approximates small-file traversal cost. On /mnt/c, this can be dramatically slower than inside Linux.
Decision: If sys time is high and real is much higher than you expect, your bottleneck is filesystem translation. Fix placement/mounts, not CPU.
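If you want the comparison as one reusable command, here is a self-contained sketch of the same measurement (GNU `date +%s%N` assumed for millisecond timing). Point it at the repo under /mnt/c and at the same repo under $HOME:

```shell
# Sketch of the "small-file tax" check: time a metadata-heavy
# traversal of a directory tree and report file count plus wall time.
traverse_bench() {
  dir="$1"
  start=$(date +%s%N)
  count=$(find "$dir" -type f | wc -l)
  end=$(date +%s%N)
  echo "$count files in $(( (end - start) / 1000000 )) ms"
}
```

Running it twice back to back also shows you cache warmth: a large drop on the second run means the page cache is doing real work for you.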
Task 5: Compare bind mount performance inside a container
cr0x@server:~$ docker run --rm -v "$PWD":/work -w /work alpine sh -lc 'time find . -type f >/dev/null'
real 0m4.02s
user 0m0.07s
sys 0m1.21s
Meaning: This measures the bind mount path from container view. If it’s much slower than running find on the host filesystem directly, the mount bridge is the culprit.
Decision: If slow, switch to Linux-side repo or use Docker volumes for dependency-heavy dirs.
Task 6: Confirm whether your container writes are going to a volume or a bind mount
cr0x@server:~$ docker inspect -f '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' myapp
bind /mnt/c/src/myapp -> /app
volume myapp_node_modules -> /app/node_modules
Meaning: A hybrid setup is often best: source via bind mount, dependencies via volume.
Decision: If node_modules or target or .venv lives on a Windows-backed bind mount, move it to a volume.
Task 7: Check WSL version and distro state (from within WSL)
cr0x@server:~$ wsl.exe -l -v
NAME STATE VERSION
* Ubuntu-22.04 Running 2
Meaning: You want WSL2 for performance and kernel compatibility. WSL1 is a different beast.
Decision: If you’re on WSL1, migrate. Don’t tune around it.
Task 8: Check whether Docker Desktop is the daemon you’re talking to (WSL integration can blur it)
cr0x@server:~$ ps -ef | grep -E 'dockerd|containerd' | grep -v grep
root 2660 1 0 10:14 ? 00:00:02 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Meaning: If you see dockerd running inside the WSL distro, you’re likely running Engine in WSL. If not, you might still be using Desktop’s socket integration.
Decision: Decide who owns the daemon: Desktop or systemd in WSL. Avoid half-and-half ownership if you want predictable debugging.
Task 9: Check systemd state in WSL (relevant for running Docker Engine properly)
cr0x@server:~$ ps -p 1 -o comm=
systemd
Meaning: If PID 1 is systemd, you can manage Docker like a normal Linux service. If it’s not, you may be using alternative init methods.
Decision: If you want WSL Engine stability, enable systemd and run Docker under it.
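Enabling systemd is a one-stanza change to wsl.conf. The helper below is a hedged sketch: writing the real /etc/wsl.conf needs root, it blindly appends (check for an existing [boot] section first), and the distro must be restarted afterward with `wsl.exe --shutdown` from Windows:

```shell
# Hypothetical helper: append the systemd stanza to a wsl.conf file.
# Pass a scratch path to preview the change; the real file is
# /etc/wsl.conf and requires root to edit.
enable_wsl_systemd() {
  conf="${1:-/etc/wsl.conf}"
  printf '[boot]\nsystemd=true\n' >> "$conf"
}
```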
Task 10: Measure build performance (BuildKit on, cache working)
cr0x@server:~$ DOCKER_BUILDKIT=1 docker build -t myapp:bench .
[+] Building 18.2s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.23kB 0.0s
=> [internal] load metadata for docker.io/library/node:20-alpine 0.8s
=> [1/6] FROM docker.io/library/node:20-alpine@sha256:... 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2.31kB 0.0s
=> [2/6] WORKDIR /app 0.1s
=> [3/6] COPY package*.json ./ 0.1s
=> [4/6] RUN npm ci 14.9s
=> [5/6] COPY . . 1.5s
=> [6/6] RUN npm test 0.8s
=> exporting to image 0.0s
Meaning: The slow step is obvious. If COPY . . is slow, your context transfer and filesystem path are bad. If npm ci is slow, you’re I/O bound or network bound.
Decision: If COPY is slow, fix repo placement and .dockerignore. If dependency install is slow, move cache directories to volumes and ensure your DNS/proxy isn’t throttling.
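For reference, the cache-friendly ordering the build output above implies looks roughly like this. It is an illustrative sketch assuming a Node app; the key move is copying manifests before source so editing code doesn't invalidate the `npm ci` layer:

```dockerfile
# Illustrative cache-friendly ordering (Node app assumed).
FROM node:20-alpine
WORKDIR /app
# Manifests first: this layer only changes when dependencies change.
COPY package*.json ./
RUN npm ci
# Source last: edits invalidate only the layers below this line.
COPY . .
CMD ["npm", "start"]
```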
Task 11: Observe container CPU throttling or host contention
cr0x@server:~$ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c3a1d2e9f0b1 myapp 225.41% 612.3MiB / 7.6GiB 7.86% 12.4MB / 9MB 1.2GB / 38MB 34
Meaning: If CPU is pegged and you’re still slow, you might be compute-bound, or you’re being throttled by assigned CPUs.
Decision: If CPU is high and build steps are compute-heavy, allocate more CPUs/memory to WSL2/Desktop or reduce parallelism in toolchains.
Task 12: Check disk usage and where Docker stores its data
cr0x@server:~$ docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 27 6 9.812GB 5.104GB (52%)
Containers 14 2 1.102GB 812.4MB (73%)
Local Volumes 19 7 22.44GB 8.201GB (36%)
Build Cache 45 0 3.993GB 3.993GB
Meaning: Volume bloat and build cache bloat can turn into “my disk is slow” because your VHDX grows and your host disk gets fragmented or pressured.
Decision: If reclaimable is high, prune build cache/unused volumes. If the host disk is nearly full, stop pretending this is a Docker problem.
Task 13: Check DNS latency inside containers (network “slow” often means DNS “slow”)
cr0x@server:~$ docker run --rm alpine sh -lc 'time nslookup registry-1.docker.io >/dev/null'
real 0m0.42s
user 0m0.01s
sys 0m0.01s
Meaning: If DNS takes seconds, package installs and pulls will crawl. VPNs and split-DNS can trigger this.
Decision: If slow, focus on DNS configuration (WSL resolv.conf behavior, Desktop DNS proxy settings, corporate DNS rules), not Docker flags.
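Because VPN and split-DNS problems are often spiky rather than uniformly slow, a single lookup can lie to you. A hedged sketch that reports the worst of a few attempts (assumes `nslookup` and GNU `date`):

```shell
# Hypothetical baseline check: time name resolution several times and
# print the worst case, since resolver problems are often intermittent.
dns_bench() {
  host="${1:-registry-1.docker.io}"
  worst=0
  for _ in 1 2 3; do
    start=$(date +%s%N)
    nslookup "$host" >/dev/null 2>&1
    end=$(date +%s%N)
    ms=$(( (end - start) / 1000000 ))
    if [ "$ms" -gt "$worst" ]; then worst=$ms; fi
  done
  echo "worst lookup: ${worst} ms"
}
```

Run it on the WSL host and inside a container; a big gap between the two points at Docker's DNS path rather than the corporate resolver itself.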
Task 14: Measure bind mount event behavior for hot reloaders (inotify vs polling)
cr0x@server:~$ docker run --rm -v "$PWD":/work -w /work alpine sh -lc 'apk add --no-cache inotify-tools >/dev/null && inotifywait -t 2 -e modify . || echo timeout'
Setting up watches.
Watches established.
timeout
Meaning: If you don’t get events reliably, your hot reload tool may fall back to polling, which burns CPU and “feels slow.”
Decision: If events don’t trigger across your mount, move code into WSL filesystem or configure the tool to use polling with sane intervals (and accept the CPU cost).
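If you have to stay on a Windows path, configuring polling explicitly beats hoping. This is a hypothetical sketch assuming a chokidar-based Node dev server (many are); the CHOKIDAR_* variables and the port are assumptions about your stack:

```shell
# Hypothetical: run a dev server with polling-based file watching for
# mounts where inotify events don't arrive. Interval is milliseconds;
# larger values trade reload latency for CPU.
run_dev_with_polling() {
  docker run --rm \
    -v "$PWD":/app -w /app \
    -e CHOKIDAR_USEPOLLING=1 \
    -e CHOKIDAR_INTERVAL=1000 \
    -p 3000:3000 \
    node:20-alpine npm run dev
}
```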
Fast diagnosis playbook (first/second/third checks)
This is the order that finds the bottleneck fastest in real teams. Not theory. Triage.
First: identify the boundary you’re crossing
- Is your repo under /mnt/c?
- Are you bind mounting Windows paths into containers?
- Are your dependency directories living on Windows-backed mounts?
If yes: expect slow metadata operations. Fix file placement/mount strategy before touching anything else.
Second: decide who owns the daemon and keep it consistent
- Are you using Docker Desktop’s daemon via integration, or a dockerd you run inside WSL?
- Do you have both installed and occasionally switch without realizing?
If inconsistent: your measurements will vary. Pick one, document it, and enforce it with contexts or team scripts.
Third: isolate build vs runtime vs network
- Build slow? Inspect docker build output: is it COPY, dependency installs, or tests?
- Runtime slow? Check docker stats, disk write patterns, and filesystem event behavior.
- Pull/install slow? Check DNS latency and VPN/proxy behavior.
Fourth: check resource caps and host contention
- Is WSL/Desktop limited to 2 CPUs and 2GB RAM because someone “optimized” it?
- Is your host disk nearly full or under heavy antivirus scanning?
If yes: you’re not tuning Docker, you’re tuning the host environment.
Three corporate mini-stories from the trenches
Incident: the wrong assumption that “WSL = Linux speed”
A mid-sized company rolled out Windows laptops for a dev team that previously used Linux workstations. The plan was simple: “Use WSL2, run Docker, everything is basically Linux again.” The first week, engineers complained that test runs and dependency installs were painfully slow, but only for some repos.
The team assumed the problem was Docker Desktop overhead and started uninstalling Desktop, switching to Docker Engine inside WSL, and tweaking daemon flags. It got… slightly better. Not good.
The real culprit was boring: the repos lived on C:\ because corporate backup and endpoint protection policies were tied to Windows paths. Devs edited code in Windows tools, and WSL accessed it via /mnt/c. Their container bind mounts crossed the boundary, and the workload was a perfect storm of small-file metadata operations.
The fix wasn’t “switch Docker.” It was a policy exception and a workflow change: store repos inside WSL for active development, keep “exported” artifacts on Windows, and use IDE WSL integration. The same Docker setup suddenly looked “faster” because the filesystem stopped being a translator for every syscall.
Optimization that backfired: starving the VM to “save battery”
Another team wanted laptops to run cooler during travel. Someone decided to clamp WSL2/Docker resources: fewer CPUs, low memory, aggressive reclaim. On paper, it reduced fan noise. In practice, it turned builds into a slot machine.
Symptoms were weird: first build after reboot was okay, second build crawled, then it got better, then it stalled again. Engineers blamed caches, then blamed Docker, then blamed each other’s branches. Classic.
The hidden behavior was memory pressure. With too little RAM, the Linux page cache couldn’t do its job. Meanwhile, repeated compiles and dependency scans churned the filesystem cache and triggered swap-like behavior inside the VM. The CPU limits amplified the pain: less parallelism, longer wall time, more time for background processes to collide.
They reverted to sane limits, then made battery savings explicit and opt-in. Moral: “optimization” that removes headroom tends to create jitter, and jitter is what makes engineers distrust every other metric.
Boring but correct practice that saved the day: pinning the workflow and measuring one thing at a time
A larger org had a mix of Desktop users and WSL Engine users. Performance complaints kept arriving, but every ticket was un-actionable: “Docker is slow.” Helpful.
An SRE-minded engineer created a short internal “Docker on Windows contract”: where repos live, which daemon is standard, how to mount dependencies, and a minimal benchmark script that measures file traversal, build time, and DNS latency separately. It was not glamorous. It was a checklist.
When an outage-like incident hit—developers couldn’t pull images and pipelines slowed—the team used the script and immediately saw DNS latency spikes inside containers. That pointed to a corporate DNS change interacting with VPN split tunneling. Without the baseline tests, they would have wasted days “tuning Docker.”
They fixed DNS policy, not Docker. The boring practice was consistency and measurement discipline, and it paid for itself the first time something went weird at scale.
Common mistakes: symptom → root cause → fix
1) “npm install takes forever in containers”
Symptom: dependency installs are 5–20× slower than expected; CPU is low; disk activity is constant.
Root cause: dependencies are being written to a Windows-backed bind mount (/mnt/c), causing slow metadata operations.
Fix: store repo in WSL filesystem, or keep node_modules in a Docker volume and mount only source code.
2) “Hot reload doesn’t trigger, so the dev server polls and burns CPU”
Symptom: file changes sometimes don’t propagate; fans spin; reload is delayed.
Root cause: inotify events don’t bridge cleanly across certain mount paths; tool falls back to polling.
Fix: move repo into WSL filesystem; configure the watcher to use polling with reasonable intervals if you must stay on Windows paths.
3) “Docker build is slow at COPY”
Symptom: COPY . . takes seconds to minutes; context transfer is heavy.
Root cause: massive build context from poor .dockerignore; repo on Windows path; many small files.
Fix: improve .dockerignore; relocate repo; reorder Dockerfile to maximize caching (copy manifests first, then install, then copy source).
4) “Pulls and apt installs hang randomly on VPN”
Symptom: image pulls stall; package installs wait on name resolution.
Root cause: DNS inside containers is slow or broken due to VPN split-DNS and NAT.
Fix: measure DNS latency; adjust DNS configuration for WSL/Desktop; coordinate with IT for correct resolver behavior.
5) “It’s fast for me but slow for new hires”
Symptom: some machines are fine, others terrible, same repo.
Root cause: inconsistent daemon ownership (Desktop vs WSL Engine), inconsistent file locations, inconsistent resource caps.
Fix: standardize: one supported setup, one repo location recommendation, one baseline diagnostic script.
6) “Disk space disappears and everything gets slower over time”
Symptom: builds slow down; disk nearly full; Docker reports lots of reclaimable data.
Root cause: build cache and volumes grow; WSL VHDX expands; host disk pressure increases.
Fix: prune unused images/volumes/build cache; monitor host disk; avoid keeping endless old layers locally.
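The prune routine can be a small, deliberately cautious script. This is a hypothetical hygiene helper: the commands are standard Docker CLI, but the FORCE guard is an assumption about how your team wants to gate destructive cleanup:

```shell
# Hypothetical hygiene helper: report reclaimable space, then prune
# build cache and unused images/volumes only when explicitly forced.
docker_hygiene() {
  docker system df
  if [ "${FORCE:-0}" = "1" ]; then
    docker builder prune -af
    docker image prune -f
    docker volume prune -f
  else
    echo "set FORCE=1 to actually prune"
  fi
}
```

Note that volume prune deletes data, not just cache; anything a service actually needs should be recreated or backed up first.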
Checklists / step-by-step plan
Plan A: you want the fastest dev loop on Windows
- Move active repos into the WSL filesystem: ~/src inside your distro.
- Use IDE support for WSL so editing still feels native.
- Use Docker Engine inside WSL2 if you want minimal layers, or Desktop if you need corporate-friendly integration—but keep the files in WSL either way.
- Mount source via Linux path. Keep dependency directories in volumes.
- Measure: file traversal time, container traversal time, build step timings, DNS latency.
Plan B: you must keep code on C:\ due to corporate policy
- Accept that /mnt/c is slower for small files. Don't fight physics with Slack messages.
- Use volumes for heavy dependency directories: node_modules, .venv, target, vendor, tool caches.
- Tune watch behavior: prefer polling with sane intervals rather than "hope inotify works."
- Improve .dockerignore to reduce context transfer.
- Keep an eye on antivirus/endpoint protection exceptions (with security approval). Those scanners love chewing on dependency trees.
Plan C: you’re optimizing builds, not live dev
- Turn on BuildKit; confirm caching works.
- Reorder Dockerfile for cache hits: copy lockfiles first, install deps, then copy source.
- Minimize build context with .dockerignore.
- Separate network issues from disk issues with a DNS timing check.
- Use multi-stage builds where it reduces final image size without increasing rebuild time.
FAQ
1) Is Docker Desktop always slower than Docker Engine in WSL2?
No. For CPU-bound workloads they’re often similar. The big wins usually come from keeping files inside the WSL filesystem and avoiding Windows-backed bind mounts.
2) What’s the single biggest performance improvement?
Move your repo from C:\ (accessed as /mnt/c) into the WSL distro filesystem, then bind mount from there. It removes a translation layer from hot paths.
3) Why are bind mounts slow on Windows?
Bind mounts that cross Windows ↔ Linux boundaries must translate filesystem semantics. Small-file metadata ops get expensive. Volumes avoid that by staying in Linux storage.
4) Can I keep using Windows editors if my repo is in WSL?
Yes. Most modern editors have WSL integration. The key is: store the files in Linux, edit via a bridge designed for it, and let containers access them without crossing NTFS.
5) Should I uninstall Docker Desktop if I run Docker Engine in WSL?
If you don’t need Desktop features, uninstalling can reduce confusion and background complexity. If your org depends on Desktop integration (proxies, credential store, support), keep it—but be explicit about which daemon you use.
6) Why does my performance change after reboot or sleep?
WSL2 VM lifecycle, page cache warmth, and dynamic memory allocation all affect “feel.” Cold caches make everything look worse. Resource caps can make it inconsistent.
7) Is Kubernetes in Docker Desktop a performance problem?
It can be, mainly by consuming CPU/RAM and adding background churn. If you’re not using it, turn it off. If you are, budget resources like an adult system, not like a demo.
8) Are Docker volumes always faster than bind mounts?
Not always. Volumes are usually faster than Windows-backed bind mounts. But a bind mount inside the WSL filesystem can be perfectly fine and more convenient for dev workflows.
9) How do I know if the bottleneck is DNS?
Time an nslookup inside a container. If it takes seconds, your “slow pulls” and “slow installs” are probably resolver/VPN/proxy issues.
10) What if my team uses Compose and it’s slow?
Compose amplifies filesystem and watch problems because it often mounts multiple services from the same repo. Fix mounts and file location first, then look at per-service resource usage.
Next steps you can actually do this week
- Pick a baseline: decide whether your standard is Docker Desktop or WSL Engine. Standardize contexts and scripts so people can’t “accidentally” compare different daemons.
- Move one repo into WSL and re-run two measurements: find traversal time on host and in a bind-mounted container. If you see a big drop, you've found your main lever.
- Fix mounts strategically: keep source editable, but keep dependency trees in volumes. It's the best compromise when policy forces Windows paths.
- Run the fast diagnosis playbook on the next “Docker is slow” complaint. If you can’t say whether it’s filesystem, DNS, or resource caps, you’re not diagnosing—just narrating.
- Document the boring rules (repo location, mounts, DNS check, build cache hygiene). This is how you avoid repeating the same performance investigation every onboarding cycle.
If you want a single opinionated recommendation: put your code in WSL2’s Linux filesystem and stop bind-mounting NTFS into containers unless you have to. Docker Desktop vs WSL Engine is secondary; the filesystem boundary is the boss fight.