Docker on Windows/WSL2 is slow: fixes that actually help

If you’re running Docker Desktop on Windows with WSL2 and it feels like your containers are doing everything through wet cement, you’re not imagining it. “npm install” takes geological time. File watchers miss changes or burn CPU. A simple git status can make you question your career choices.

This is fixable. But not with vibes, registry hacks you found in a forum, or toggling random checkboxes until the fan stops screaming. You fix it by identifying which subsystem is actually slow: the filesystem boundary, the VHDX disk, CPU scheduling, memory pressure, DNS, or build cache behavior.

Fast diagnosis playbook

Most teams waste days “tuning Docker” when the bottleneck is a single boundary: Windows files on /mnt/c, or a bind mount into a container, or a VHDX that ballooned and is now doing sad I/O. Here’s the quick path to truth.

First: determine where the slow files live

  • If your repo is under /mnt/c (or any /mnt/<drive>): assume file I/O is your bottleneck until proven otherwise.
  • If your repo is inside the Linux filesystem (~ in WSL2): file I/O is usually fine; next check CPU/memory pressure or container mount patterns.

Second: identify the “slow operation class”

  • Many small files (node_modules, vendor dirs, monorepos): bind mounts and Windows interop will hurt.
  • Database workloads (Postgres, MySQL, Elasticsearch): random I/O + fsync + thin provisioning issues.
  • Build workloads (Docker builds): cache placement and context transfer dominate.
  • Network workloads (pulling images, private registries): DNS and proxy config, not “Docker performance.”

Third: run three tests to localize the bottleneck

  1. WSL filesystem test: create and stat a pile of files under ~ in WSL2.
  2. /mnt test: repeat under /mnt/c.
  3. Container mount test: run the same test inside a container on a bind mount vs a named volume.

If /mnt/c is dramatically slower than ~, stop. Move the repo. If container bind mounts are slower than named volumes, stop. Change the mount strategy. If both are fine, you’re likely CPU/memory constrained or DNS-bound.

Fourth: check for resource starvation

If WSL2 is under-provisioned (or over-provisioned in a way that starves Windows), everything becomes jittery: file watchers, compilers, databases. Fix memory limits and swap, then retest.

Why it’s slow: the real architecture and its traps

Docker Desktop on Windows with WSL2 is not “Docker running on Windows.” It’s Docker running in a Linux VM with a lot of plumbing to make it feel native. That plumbing is where performance goes to develop hobbies.

At a high level:

  • WSL2 is a lightweight VM using Hyper-V under the hood.
  • Your Linux distro (Ubuntu, Debian, etc.) lives on an ext4 filesystem inside a VHDX.
  • Docker Desktop typically runs the daemon in a special WSL2 distro (often docker-desktop).
  • When you access Windows files from WSL2 (like /mnt/c/Users/...), you cross a boundary implemented by a special filesystem layer.
  • When you bind-mount a Windows path into a Linux container, you often cross that boundary again, plus container overlay and mount semantics.

Performance problems usually come from one of these failure modes:

  • Interop filesystem overhead: Linux tooling expects cheap metadata operations. Windows files accessed via WSL’s mount layer can make stat() and directory walks expensive.
  • Too many files: Millions of tiny files are a stress test for any cross-OS file sharing layer.
  • Disk image behavior: VHDX grows easily; shrinking is non-trivial; fragmentation and free-space layout matter more than you want.
  • Memory pressure: WSL2 will cache aggressively; under pressure, you get eviction storms and “why is everything swapping?” moments.
  • Build context transfer: When you build from a Windows path, Docker may spend real time sending your context into the VM.
  • Inotify and watchers: Some toolchains rely on filesystem events; crossing boundaries can break semantics or trigger polling.

One quote to keep you honest: “Hope is not a strategy.” — paraphrased idea attributed to Gen. Gordon R. Sullivan, commonly repeated in reliability circles. Replace “hope” with “random toggles,” and you’re ready for production.

Facts and historical context (short, useful)

  • WSL1 vs WSL2 was a filesystem trade: WSL1 translated Linux syscalls to Windows; WSL2 moved to a real Linux kernel in a VM. Many things got more compatible, but cross-filesystem access became a bigger deal.
  • Docker Desktop used Hyper-V before WSL2 integration matured: early Windows Docker setups ran LinuxKit VMs; WSL2 reduced overhead and improved integration, but didn’t eliminate VM boundaries.
  • ext4-in-VHDX is fast… until you make it weird: it behaves like a normal Linux filesystem, but the backing VHDX file has its own lifecycle (growth, fragmentation, compaction).
  • Metadata-heavy workloads are the common pain: package managers and language toolchains do tons of stat(), readdir(), and short-lived file writes.
  • File sharing semantics differ: case sensitivity, executable bits, symlinks, and permission mapping are not free. “Works” doesn’t mean “works fast.”
  • BuildKit changed the build game: modern Docker builds can cache more intelligently, but only if you stop sabotaging the cache with huge contexts and shifting timestamps.
  • Named volumes are not just for production: they bypass some of the slow host-path sharing layers and can be dramatically faster for hot directories.
  • WSL2 networking is virtualized: DNS and routing can be different from Windows, and corporate VPNs love to get involved.

Do these tasks first: 12+ command-driven checks with decisions

These are real tasks you can run. Each one includes: command, example output, what it means, and what decision to make.

Task 1: Confirm where your repo actually lives

cr0x@server:~$ pwd
/home/cr0x/work/myapp

Meaning: This is inside the Linux filesystem (good). If instead you see /mnt/c/..., that’s a prime suspect.

Decision: If your project is on /mnt/c, plan to move it into ~ and use an editor that supports WSL remote access.
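
The move itself is boring. A minimal sketch, assuming a repo called myapp and a ~/src layout (both placeholders), done as a fresh clone from inside WSL rather than a copy out of /mnt/c:

# inside WSL, not in a Windows shell
mkdir -p ~/src && cd ~/src
git clone git@github.com:example/myapp.git   # placeholder URL; a fresh clone beats copying node_modules across the boundary
cd myapp
code .   # example: VS Code with its WSL remote extension; other editors have equivalents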

Task 2: Compare small-file performance: WSL home vs /mnt/c

cr0x@server:~$ mkdir -p ~/io-test && cd ~/io-test && time bash -lc 'rm -rf a && mkdir a && for i in $(seq 1 20000); do echo x > a/f$i; done; find a -type f | wc -l'
20000

real    0m3.412s
user    0m1.102s
sys     0m2.101s
cr0x@server:~$ mkdir -p /mnt/c/Users/Public/io-test && cd /mnt/c/Users/Public/io-test && time bash -lc 'rm -rf a && mkdir a && for i in $(seq 1 20000); do echo x > a/f$i; done; find a -type f | wc -l'
20000

real    1m12.883s
user    0m2.004s
sys     0m8.991s

Meaning: If your /mnt/c run is an order of magnitude slower, you’ve found the core problem.

Decision: Keep source + dependency directories in the WSL filesystem. Treat /mnt/c as a convenience mount, not a build farm.

Task 3: Check Docker context and engine details

cr0x@server:~$ docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT
default *           Current DOCKER_HOST based configuration   npipe:////./pipe/docker_engine

Meaning: An npipe:// endpoint means this CLI is reaching Docker Desktop through a Windows named pipe, even though the engine itself runs in WSL2. Run docker context ls from inside WSL with Desktop integration enabled and you’ll normally see a unix:// socket (or a desktop-linux context) instead.

Decision: If you’re mixing Windows and WSL CLIs, pick one workflow. Prefer running the Docker CLI inside WSL for fewer boundary crossings.

Task 4: Verify you’re building from Linux paths (context matters)

cr0x@server:~$ docker build -t myapp:ctxcheck .
[+] Building 9.6s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                   0.0s
 => [internal] load .dockerignore                                      0.0s
 => [internal] load metadata for docker.io/library/node:20-alpine      0.9s
 => [internal] load build context                                      6.8s
 => => transferring context: 412.34MB                                  6.6s
...

Meaning: “transferring context” taking seconds is normal-ish for large contexts; if it’s tens of seconds, you’re paying a boundary tax or sending too much.

Decision: Add a real .dockerignore and build from within WSL. If your context is huge, stop shipping node_modules to the daemon.

Task 5: See if BuildKit is enabled (it should be)

cr0x@server:~$ docker buildx version
github.com/docker/buildx v0.12.1

Meaning: Buildx exists; BuildKit is available.

Decision: Use BuildKit features like cache mounts for package managers. If you’re still on legacy builder behavior, expect pain.

Task 6: Measure container bind mount vs named volume performance

cr0x@server:~$ docker volume create voltest
voltest
cr0x@server:~$ mkdir -p ~/bindtest
cr0x@server:~$ time docker run --rm -v ~/bindtest:/work alpine sh -lc 'cd /work; rm -rf a; mkdir a; for i in $(seq 1 20000); do echo x > a/f$i; done; find a -type f | wc -l'
20000

real    0m18.911s
user    0m0.211s
sys     0m0.404s
cr0x@server:~$ time docker run --rm -v voltest:/work alpine sh -lc 'cd /work; rm -rf a; mkdir a; for i in $(seq 1 20000); do echo x > a/f$i; done; find a -type f | wc -l'
20000

real    0m4.022s
user    0m0.203s
sys     0m0.380s

Meaning: If named volumes are much faster, your bind mount path is the bottleneck.

Decision: Put hot directories (deps, caches, DB data) on named volumes. Bind mount only the code you actively edit.

Task 7: Check WSL memory, swap, and pressure

cr0x@server:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:            7.6G        6.8G        220M        120M        650M        390M
Swap:           2.0G        1.4G        600M

Meaning: You’re close to the cliff. Low “available” and active swap means you’ll get latency spikes everywhere.

Decision: Increase WSL2 memory, reduce parallelism, or stop running five databases plus a browser plus Teams. Pick two.

Task 8: Identify hot I/O and processes inside WSL

cr0x@server:~$ sudo apt-get update -qq && sudo apt-get install -y -qq iotop
...
cr0x@server:~$ sudo iotop -o -b -n 3
Total DISK READ: 12.34 M/s | Total DISK WRITE: 45.67 M/s
  PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 2421 be/4  postgres   0.00 B/s   9.12 M/s  0.00 %  85.12 %  postgres: checkpointer
 9911 be/4  cr0x       0.00 B/s   5.44 M/s  0.00 %  62.10 %  node dev-server

Meaning: You can see who’s hammering the disk. Postgres checkpointer doing sustained writes often means fsync + checkpoint settings, or a workload mismatch.

Decision: If databases are dominating, move their data dirs to named volumes and review DB settings for dev (not production) safety/latency tradeoffs.

Task 9: Check filesystem mount options for /mnt/c (metadata behavior)

cr0x@server:~$ mount | grep -E ' /mnt/c '
C: on /mnt/c type 9p (rw,noatime,dirsync,aname=drvfs;path=C:\;uid=1000;gid=1000;metadata;symlinkroot=/mnt/)

Meaning: The drvfs/9p mount options shape behavior here. The metadata option improves permission semantics but adds per-operation overhead; dirsync forces synchronous directory updates and is often expensive.

Decision: Don’t tune mount flags blindly. If your repo is on /mnt/c, the fix is moving it. If you must stay, consider adjusting automount options, but treat it as a last resort.
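
If you’re truly stuck on /mnt/c, the automount options live in /etc/wsl.conf inside the distro. A hedged sketch only; the exact option mix is workload-dependent, and no flag combination makes metadata-heavy workloads behave like ext4:

# back up /etc/wsl.conf first if it already exists; this overwrites it
sudo tee /etc/wsl.conf >/dev/null <<'EOF'
[automount]
enabled = true
options = "metadata,umask=22,fmask=11"
EOF
# takes effect only after a full restart: run `wsl --shutdown` from Windows, then reopen the distro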

Task 10: See whether your VHDX is huge and you’re paying for it

cr0x@server:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd        251G  201G   38G  85% /

Meaning: High usage and a large ext4 filesystem can correlate with slower operations, especially if you’re constantly churning dependencies.

Decision: Clean caches, prune Docker artifacts, and consider compacting/shrinking the VHDX periodically (carefully, with backups).

Task 11: Measure DNS behavior from inside a container

cr0x@server:~$ docker run --rm alpine sh -lc 'apk add -q bind-tools; time nslookup registry-1.docker.io'
Server:         192.168.65.1
Address:        192.168.65.1:53

Non-authoritative answer:
Name:   registry-1.docker.io
Address: 54.161.123.11

real    0m1.882s
user    0m0.012s
sys     0m0.008s

Meaning: Multi-second DNS lookups will make pulls/builds “feel slow” even when disk is fine.

Decision: If DNS is slow, fix DNS (WSL resolv.conf generation, corporate DNS, VPN split tunneling). Don’t blame overlay2 for your resolver.

Task 12: Confirm you’re not doing pathological image/volume bloat

cr0x@server:~$ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          42        8         18.4GB    12.7GB (68%)
Containers      19        2         1.2GB     900MB (75%)
Local Volumes   31        6         54.0GB    40.5GB (75%)
Build Cache     0         0         0B        0B

Meaning: Volumes eating 54GB is normal only if you’re running real datasets. In dev, it’s often abandoned DBs and caches.

Decision: Prune what you can, but with intent. If volumes are holding important state, migrate or back up before nuking.

Task 13: Verify whether a container is doing too much fsync (databases)

cr0x@server:~$ docker exec -it my-postgres sh -lc 'psql -U postgres -c "SHOW synchronous_commit; SHOW fsync;"'
 synchronous_commit 
-------------------
 on
(1 row)

 fsync 
-------
 on
(1 row)

Meaning: In dev, synchronous_commit=on and fsync=on is safe but can be slow on virtualized storage paths.

Decision: For dev-only speed, consider relaxing settings (accepting data loss on crashes). For anything that resembles production testing, keep defaults and optimize the storage path instead.
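
For a genuinely disposable dev database, the relaxed settings can be passed as server flags at container start. A sketch using the stock postgres image and a named volume (names are placeholders); keep it well away from anything that pretends to be production:

# dev-only: trades crash safety for latency
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devonly \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16 \
  -c fsync=off \
  -c synchronous_commit=off \
  -c full_page_writes=off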

Task 14: Spot file watcher fallback to polling (CPU burn)

cr0x@server:~$ ps aux | grep -E 'watch|chokidar|webpack|nodemon' | head
cr0x     18821  65.2  3.1 1245320 251212 ?      Sl   10:11   5:44 node node_modules/.bin/webpack serve
cr0x     18844  22.7  1.2  981244  96120 ?      Sl   10:11   2:03 node node_modules/chokidar/index.js

Meaning: High CPU from watchers often means they aren’t getting native events and are scanning repeatedly.

Decision: Keep watched directories inside WSL’s ext4, not on /mnt/c. Reduce watch scope. Prefer tooling configs that use inotify effectively.

Fixes that actually move the needle

1) Put your code in the WSL filesystem. Yes, really.

This is the single highest-ROI fix for developer workloads heavy on small files. Keep repos under ~/src inside WSL. Access them from Windows via WSL-aware tooling instead of the reverse.

  • Do: git clone inside WSL.
  • Don’t: keep the repo on NTFS and “just mount it.” That’s how you get 70-second directory walks.

Joke #1: If you insist on running node_modules from /mnt/c, you’re not tuning performance—you’re reenacting a slow-motion disaster movie.

2) Use named volumes for hot data, bind mounts for the edit loop

The mental model that saves you: bind mounts are for source code; named volumes are for churn. Databases, package manager caches, dependency directories, anything that creates thousands of files rapidly—give it a volume.

Example strategy:

  • Bind mount: ./app:/app
  • Named volume: node_modules:/app/node_modules
  • Named volume: pgdata:/var/lib/postgresql/data

Why it helps: named volumes live in the Linux VM’s filesystem layer directly, avoiding slow cross-OS file sharing.
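
A sketch of that split in Compose terms (service names, image tags, and paths are placeholders, written here from a WSL shell):

cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .
    volumes:
      - ./:/app                          # source you actively edit, bind-mounted from a WSL path
      - node_modules:/app/node_modules   # dependency churn stays inside the Linux VM
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly         # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data  # DB data never touches a Windows path
volumes:
  node_modules:
  pgdata:
EOF

The named volume shadows /app/node_modules inside the container, so installs write to the VM’s filesystem even though the surrounding code is a bind mount you can still edit from the host.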

3) Stop shipping garbage in your Docker build context

If your build context is hundreds of MB, you’re paying for a tarball + transfer + extraction step before the first Dockerfile line matters. Your .dockerignore should be aggressive: ignore dependency dirs, build outputs, test artifacts, local caches.

Most teams do this halfway: they ignore node_modules but forget .git, dist, .next, coverage, and language-specific caches. Then they wonder why “load build context” dominates.
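
A starting point, assuming a typical Node project; adapt the entries to your stack and err on the side of excluding more:

# keep the build context down to what the build actually needs
cat > .dockerignore <<'EOF'
.git
node_modules
dist
build
.next
coverage
*.log
.cache
EOF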

4) Use BuildKit cache mounts for package managers

BuildKit can cache directories across builds without baking them into layers. That means faster rebuilds and smaller images. If you’re doing repeated builds on WSL2, this matters.

Example patterns (conceptually): cache the package manager’s download cache, not your application output. Keep the build deterministic.
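
A minimal Dockerfile sketch for a Node app (file names are illustrative): the manifests are copied before the code so the dependency layer stays cached, and npm’s download cache persists across builds via a BuildKit cache mount without ever ending up in a layer.

cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app

# copy only the manifests so this layer's cache survives code edits
COPY package.json package-lock.json ./

# reuse npm's download cache across builds; it never enters the image
RUN --mount=type=cache,target=/root/.npm \
    npm ci

COPY . .
CMD ["node", "server.js"]
EOF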

5) Keep Docker CLI inside WSL when possible

Using Windows CLI to control a daemon in WSL2 isn’t always terrible, but it adds boundaries, environment differences, and path confusion. Running the CLI in WSL reduces friction and makes paths sane.

6) Right-size WSL2 resources (and don’t starve Windows)

WSL2 will happily eat RAM for cache, then release it… sometimes… eventually. If you do nothing, you might get great performance until you don’t. Add a resource policy so your laptop remains a laptop.

Typical strategy: cap memory, set reasonable swap, and avoid giving WSL all cores if Windows needs to stay interactive.
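
Those caps live in a .wslconfig file in your Windows user profile, not inside the distro. A sketch assuming roughly 32 GB of host RAM; <you> is your Windows user name, and the numbers are judgment calls, not gospel:

# written from WSL; this is %UserProfile%\.wslconfig on the Windows side
cat > /mnt/c/Users/<you>/.wslconfig <<'EOF'
[wsl2]
memory=16GB      # hard cap so Windows stays responsive
processors=6     # leave cores for the host
swap=8GB         # some swap, not infinite swap
EOF
# apply with `wsl --shutdown` from a Windows shell, then reopen the distro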

7) Be deliberate about antivirus and indexing

When Windows Defender (or corporate endpoint protection) scans the same files you’re hammering via WSL2 interop, you get “mysterious” slowness. Exclusions can help, but do them with your security team, not in a panic at 2 a.m.

8) Treat database durability settings like a dev/prod contract

If you relax durability (fsync off, async commit), your dev DB will be fast and also capable of losing data when the VM sneezes. That’s fine for local dev; it’s unacceptable for integration tests that claim to mimic production.

Decide which one you’re doing, then configure accordingly.

9) Fix DNS if pulls and connects are slow

In corporate networks, DNS can be the hidden villain. WSL2 has its own resolver setup, Docker has its own internal DNS, and VPN clients love to patch things at runtime.

Rule: if nslookup is slow inside a container, fix DNS first. Don’t touch storage knobs until name resolution is under 100ms for common domains.
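
Two quick checks from the WSL side before anyone touches storage knobs; generateResolvConf is the usual culprit when a VPN client has been “helping”:

# what resolver is WSL actually using?
cat /etc/resolv.conf

# is WSL still generating that file, or did something pin it?
grep -A2 '\[network\]' /etc/wsl.conf 2>/dev/null   # look for generateResolvConf = false

# time a lookup through the system resolver to compare with the in-container number from Task 11
time getent hosts registry-1.docker.io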

10) Keep watchers where inotify works

File watchers are performance multipliers: inotify means event-driven changes; polling means “scan the planet repeatedly.” Put watched trees on ext4 inside WSL2, reduce scope, and configure your tooling to avoid watching vendor directories.
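
If watchers already live on ext4 and still misbehave, check the inotify limits inside WSL; large monorepos can exhaust the defaults. A sketch, with numbers that are a judgment call rather than a recommendation:

# current limits
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances

# raise the watch limit persistently, then reload
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p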

11) Prune and compact intentionally, not as a ritual

Docker artifacts build up. WSL2 disk images grow. Pruning can help, but reckless pruning destroys state and wastes time rebuilding. Establish a cadence: prune build cache weekly, clean abandoned volumes monthly, and compact disk images when they’ve grown massively due to churn.
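
A cadence sketch along those lines; look before every delete, and treat the filter values as starting points:

# see what is actually consuming space before deleting anything
docker system df -v

# weekly: drop build cache older than a week
docker builder prune --filter until=168h

# monthly: review unreferenced volumes, then prune (newer Docker needs -a to touch named volumes)
docker volume ls -f dangling=true
docker volume prune

# compacting the VHDX on the Windows side only pays off after space is freed inside Linux,
# and it starts with `wsl --shutdown`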

Joke #2: “Docker prune” is not a performance strategy; it’s a confession that you don’t know what’s using disk.

Three corporate mini-stories from the trenches

1) Incident caused by a wrong assumption: “It’s on my SSD, so it’s fast”

One team I worked with migrated from Mac laptops to Windows laptops for procurement reasons. Docker Desktop + WSL2 looked like the easiest way to keep their dev workflow “the same.” They cloned repositories into C:\Users\..., opened them in their Windows IDE, and let WSL2 and containers do the rest.

Within a week, the symptoms were everywhere: tests timing out, watch mode missing rebuilds, and an uptick in “Docker is broken” tickets. Their logs were clean. Their containers were healthy. But every developer described the same thing: small operations were slow, and the slowdown felt nonlinear, like it got worse during the day.

The wrong assumption was simple: “the code is on an SSD, so file I/O can’t be the bottleneck.” They forgot that the code wasn’t being accessed as raw NTFS by the Linux processes. It was being accessed across a boundary, with translation and metadata semantics glued on top.

We ran a dumb test: create 20,000 tiny files on /mnt/c and in ~. The delta was so large nobody argued about it. They moved repos into WSL, used WSL-aware editor integration, and bind-mounted from the Linux path into containers.

The dramatic part wasn’t the fix. It was how quickly the “Docker is slow” narrative evaporated once the workflow stopped round-tripping through Windows filesystem semantics for every stat().

2) Optimization that backfired: “Let’s put everything on volumes”

Another organization saw bind mounts as “the slow part,” which is often true. So they went all-in: everything moved to named volumes, including source code. Their Compose file created a volume per service, then they used docker cp to sync code into the volume, and ran the app from there.

Performance improved immediately. Builds were faster. Node installs stopped timing out. Victory laps were taken.

Then the backfire: developer experience degraded. Incremental edits didn’t always propagate predictably. Debugging became awkward because the source of truth was now “code in a volume,” not “code in a repo.” A few developers accidentally had stale code living in volumes and spent hours chasing phantom bugs that “nobody else can reproduce.”

Even worse, security scanning and license tooling expected to inspect the repo on disk. They now needed a new pipeline to scan the volume contents, which nobody wanted to own. Performance won, but the workflow became fragile.

The stable compromise was boring: bind mount the source code (from WSL ext4), keep hot churn directories (dependencies, caches, DB data) on named volumes, and keep a single source of truth—your git checkout.

3) Boring but correct practice that saved the day: “Measure the boundary, then standardize”

A platform team supporting multiple product teams saw recurring complaints: “Docker is slow on Windows.” They could have written a wiki page and prayed. Instead, they created a minimal diagnostic script and made it part of onboarding.

It did three things: a small-file creation benchmark under ~ and under /mnt/c, a container bind-mount benchmark, and a DNS lookup timing. It printed results with thresholds and a recommended action (“move repo,” “use named volume,” “check VPN DNS”).

This practice was not glamorous. But it prevented months of low-grade productivity loss. New hires learned the expected workflow on day one: keep repos in WSL, use remote tooling, don’t bind mount Windows paths, don’t blame Docker for DNS.

When performance regressions happened after Windows updates, they had baseline results to compare against. That’s how you avoid treating performance like folklore.

Common mistakes: symptom → root cause → fix

1) “npm install takes forever”

Symptom: installs take minutes; CPU is low; disk activity looks busy but not saturated.

Root cause: dependency tree lives on /mnt/c or is bind-mounted from a Windows path; metadata operations are expensive.

Fix: move repo to WSL ext4; put node_modules on a named volume; ensure build context ignores deps.

2) “File watching is unreliable or burns CPU”

Symptom: hot reload misses changes; fans spin; watchers show high CPU.

Root cause: inotify events aren’t flowing across boundary; tool falls back to polling huge trees.

Fix: keep watched directories in WSL filesystem; reduce watch scope; avoid watching dependency dirs; configure tooling for WSL.

3) “Docker build is slow even with cache”

Symptom: “load build context” dominates; cache misses frequently.

Root cause: enormous context, poor .dockerignore, building from Windows paths; timestamps or generated files churn layers.

Fix: aggressive .dockerignore; build from WSL paths; use BuildKit cache mounts; stabilize inputs.

4) “Database in a container is painfully slow”

Symptom: high latency on simple queries; lots of disk writes; occasional stalls.

Root cause: DB data dir on bind mount crossing Windows boundary; durability settings amplify I/O cost; memory pressure triggers checkpoint churn.

Fix: store DB data on named volume; allocate more memory; for dev-only, consider relaxed durability (with eyes open).

5) “Pulling images is slow; builds hang at apt/apk/npm”

Symptom: network steps stall; retries; works on home Wi‑Fi, fails on VPN.

Root cause: DNS latency or broken resolver under WSL2/Docker; corporate proxy/VPN interactions.

Fix: measure DNS from inside container; adjust DNS configuration; coordinate with IT on split DNS/proxy policy.

6) “Everything was fine, then it got slow over months”

Symptom: gradual decay; disk usage grows; pruning “helps” temporarily.

Root cause: VHDX growth, accumulated images/volumes, dependency churn; possible fragmentation and low free space.

Fix: clean intentionally (volumes/images/build cache); keep free space; compact VHDX as a scheduled maintenance operation.

Checklists / step-by-step plan

Checklist A: Fix the dev workflow (fastest wins)

  1. Move repos into WSL: ~/src is your home base.
  2. Run Docker CLI in WSL: reduce path confusion and boundary crossings.
  3. Bind mount from WSL paths only: never bind mount C:\ paths into Linux containers if you care about speed.
  4. Put churn on volumes: DB data, dependency dirs, package caches.
  5. Harden .dockerignore: reduce context transfer and cache invalidation.
  6. Measure again: rerun the small-file and mount benchmarks so you can prove improvement.

Checklist B: Stabilize WSL2 resource behavior

  1. Pick a memory cap that keeps Windows responsive and WSL productive.
  2. Set swap intentionally (not zero, not infinite). Swap hides problems until it doesn’t.
  3. Watch “available” memory during builds and tests; avoid sustained swap-in/out.
  4. Stop running duplicate stacks: don’t run the same DB in Windows and in containers “just in case.”

Checklist C: Make builds fast and predictable

  1. Enable BuildKit and use cache mounts where appropriate.
  2. Minimize context; if the context is huge, you’re paying tax every build.
  3. Separate dependencies from app code in your Dockerfile to maximize cache hits.
  4. Don’t bind mount build outputs back to Windows unless you need them there; keep hot build artifacts in Linux paths.

Checklist D: Debug networking slowness without guessing

  1. Measure DNS inside container (nslookup timing).
  2. Confirm proxy settings inside build steps (build-time proxy != runtime proxy).
  3. Re-test on/off VPN to isolate corporate network effects.
  4. Fix resolver configuration before changing Docker storage settings.

FAQ

1) Should I use WSL2 or Hyper-V backend for Docker Desktop?

Use WSL2 unless you have a specific compatibility reason not to. The big performance issues usually aren’t “WSL2 vs Hyper-V,” they’re “Windows filesystem paths vs Linux filesystem paths.”

2) Is it safe to keep my repo inside WSL? Will I lose it?

It’s as safe as any local development environment: back it up and push to remote. WSL’s ext4-in-VHDX is stable, but don’t treat “local only” as a durability plan.

3) Why is /mnt/c so much slower for dev tooling?

Because you’re crossing a translation layer with different metadata semantics. Linux tools do lots of small syscalls; the boundary makes each one cost more.

4) Do I need to disable Windows Defender?

No. You may need targeted exclusions for developer directories if your security policy allows it. Coordinate with IT/security; random disabling is how you create exciting incident tickets.

5) Are named volumes always faster than bind mounts?

Not always. Bind mounts from WSL ext4 can be fine. Bind mounts from Windows paths are often the slow ones. Named volumes tend to be consistently fast for churn-heavy directories.

6) Why is my Docker build context transfer huge?

Because you’re sending too much: dependency directories, build outputs, and sometimes even the whole .git history. Fix .dockerignore and keep builds running from WSL paths.

7) My database is slow in a container; should I just install it on Windows?

Sometimes that’s a pragmatic workaround, but it increases divergence from production and complicates tooling. First try: named volume for data + adequate memory + avoid Windows-path mounts.

8) Why does performance degrade over time?

Disk images grow, caches accumulate, volumes pile up, and free space shrinks. Also, toolchains change and generate more files. Treat cleanup and compaction as maintenance, not a panic button.

9) Should I run the IDE in Windows or inside WSL?

Either can work, but the crucial part is where the files live. Windows IDE with WSL remote support is a common sweet spot: Windows UI, Linux filesystem.

10) Is “turn off file sharing” a fix?

It’s a way to stop using slow paths by accident. The real fix is to structure your workflow so you don’t need cross-OS file sharing for hot paths.

Next steps

If you want Docker on Windows with WSL2 to feel sane, do this in order:

  1. Run the benchmarks (Task 2 and Task 6). Don’t debate; measure.
  2. Move the repo into WSL if /mnt/c is slow. This solves the biggest class of pain.
  3. Switch hot directories to named volumes (DB data, dependency dirs, caches).
  4. Fix build context bloat and use BuildKit caching intentionally.
  5. Check DNS if pulls/build steps stall; fix the resolver before you touch storage knobs.
  6. Right-size WSL resources and keep free space healthy; performance loves breathing room.

Do those, and the “Docker is slow on Windows” meme becomes what it should be: an occasional complaint, not a defining feature of your day.
