WSL Is Slow? Fix File I/O with This One Rule

When WSL is fast, it’s boring: your builds run, your tests finish, your laptop fans pretend they’re not working overtime. When WSL is slow, it’s personal. A git status takes seconds. npm install feels like it’s walking each file to disk by hand. You start blaming “WSL” as a concept, which is like blaming “roads” for traffic.

Most WSL file I/O pain comes from one self-inflicted wound: you’re doing Linux-y workloads on the Windows filesystem. Fix that, and a lot of “WSL is slow” disappears.

The one rule: don’t cross the filesystem boundary

Rule: Do Linux development work on the Linux filesystem (/home, /, WSL’s ext4 VHDX). Do Windows work on the Windows filesystem (C:\, D:\). Don’t make either OS do heavy file I/O on the other’s filesystem.

Translated into daily behavior:

  • Clone repos into ~/src inside WSL, not into /mnt/c/Users/you/src.
  • Keep node_modules, Python virtualenvs, Rust target dirs, Go build caches, Gradle caches, and anything that sprays thousands of small files inside WSL.
  • If an editor is Windows-native (Visual Studio, Windows Git GUI, Windows Node, etc.), point it at Windows paths. If the tooling runs in WSL, point it at WSL paths.
  • Use VS Code Remote – WSL (or similar) so the editor talks to files where the Linux tools run.
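If you want a guardrail for the rule, a tiny shell snippet can help. This is a hypothetical helper for your ~/.bashrc (the function name is mine, not a WSL feature); it warns whenever your shell is sitting on a Windows mount:

```shell
# Hypothetical ~/.bashrc guardrail: warn whenever the current directory
# sits under /mnt/, i.e. on the Windows filesystem via the interop mount.
warn_if_windows_path() {
  case "$PWD" in
    /mnt/*) echo "note: $PWD is on the Windows filesystem; heavy I/O here will be slow" >&2 ;;
  esac
}

# Run the check before each prompt.
PROMPT_COMMAND="warn_if_windows_path${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```

It won't stop you, but it makes the boundary visible every time you cross it, which is most of the battle.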

This isn’t ideology. It’s mechanics. Crossing the boundary forces translation layers, permission mapping, metadata emulation, and sometimes antivirus scanning in exactly the worst possible pattern: lots of tiny files and frequent stat calls.

One short joke, because we’re friends: WSL isn’t slow—your files are just commuting across town during rush hour.

Why the boundary hurts: what actually happens on each side

WSL2 is a real Linux kernel inside a lightweight VM

WSL2 runs a real Linux kernel. That’s good for compatibility and for performance inside Linux. But it also means Linux has its own native filesystem (ext4) stored in a virtual disk (a VHDX file) on the Windows side.

Inside WSL, when you read ~/src/app, you’re using Linux’s normal filesystem stack: page cache, ext4 metadata, inode lookups, directory entry caching, etc. It’s the kind of I/O path Linux has spent decades optimizing.

/mnt/c is not “just another folder”

When you access Windows files from WSL, you usually do it via /mnt/c (or /mnt/d, etc.). That path is backed by a Windows filesystem (typically NTFS) and presented to Linux via a special integration layer. That layer has to translate:

  • Linux file permissions into something NTFS can represent (and back)
  • Case sensitivity expectations
  • Symlink behavior and metadata
  • Filesystem notifications (inotify vs Windows change events)
  • Path semantics and illegal characters

The real killer is the shape of developer workloads. Builds and package managers don’t do one big sequential read. They do millions of small operations: stat(), open(), close(), directory scans, file creation, permission changes. Every one of those calls can cross the boundary.
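You can reproduce that shape directly. Here's an illustrative script (the function name is mine, not a standard tool) that fires a burst of tiny create-plus-stat operations at a directory of your choosing; run it once against a path under ~/ and once against /mnt/c and compare the numbers:

```shell
# Illustrative "small-file storm": create and stat 2000 tiny files under
# the given directory, then report elapsed wall time in milliseconds.
# Assumes GNU date (%N nanosecond format).
small_file_storm() {
  local dir="$1/storm.$$"
  mkdir -p "$dir"
  local start end
  start=$(date +%s%N)
  for i in $(seq 1 2000); do
    echo x > "$dir/f$i"           # one small create+write
    stat "$dir/f$i" >/dev/null    # one metadata read
  done
  end=$(date +%s%N)
  rm -rf "$dir"
  echo "$1: $(( (end - start) / 1000000 )) ms for 2000 create+stat"
}
```

Run small_file_storm ~ and small_file_storm /mnt/c/Users/you; the ratio between the two results is the boundary tax for exactly this workload shape.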

Antivirus and indexing love to “help” at exactly the wrong time

When files live on the Windows filesystem, Windows Defender (or another endpoint suite) may scan them on create/open/modify. That’s not a moral failing; it’s the job. But developer workloads look suspiciously like malware from a purely mechanical standpoint: thousands of short-lived files, rapidly created, modified, and executed.

Inside WSL’s ext4 VHDX, Defender can’t hook every Linux syscall the same way. The scanning model changes, and the overhead often drops dramatically.

Docker and bind mounts can double your pain

If you run Docker Desktop integrated with WSL, bind-mounting a Windows path into a Linux container makes I/O bounce across layers: container → Linux VM → Windows filesystem → back. If you bind-mount a WSL path (Linux ext4) into the container, you stay mostly inside the Linux stack.

Second short joke (and that’s it): If you bind-mount /mnt/c into a container, you’ve invented a new benchmark called “time to regret.”

Interesting facts and history that explain today’s weirdness

  • WSL1 vs WSL2 is not a minor version bump. WSL1 translated Linux syscalls to Windows syscalls; WSL2 runs a real kernel in a VM, changing I/O behavior and compatibility.
  • WSL2’s Linux filesystem lives inside a VHDX. That’s a virtual disk file stored on Windows, typically under your user profile. The Linux ext4 filesystem is inside that file.
  • Cross-filesystem file metadata is expensive. Linux tooling calls stat() constantly; mapping Windows metadata to Linux semantics is not free, especially across a VM boundary.
  • Case sensitivity is historically different. NTFS supports case sensitivity features, but Windows tooling historically assumed case-insensitive paths. Linux assumes case-sensitive. The compatibility glue has to referee.
  • Inotify is central to modern dev stacks. Webpack, Jest, language servers—many rely on Linux filesystem events. Mapping file change notifications across Windows and Linux layers is notoriously tricky and sometimes slow.
  • Small-file workloads are worst-case for boundary layers. Databases, package managers, and build systems do lots of fsyncs, renames, temp files, and directory churn—exactly what “interop filesystems” hate.
  • Enterprise endpoint protection changed the calculus. As corporate security tightened, Windows-side scanning overhead became more noticeable. The same laptop can “feel fast” or “feel broken” just by moving a repo.
  • WSL’s drvfs mount has performance-sensitive options. Settings like metadata handling can change correctness and speed when using /mnt/c.
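If you do tune the drvfs mount, the settings live in /etc/wsl.conf inside the distro. A hedged sketch (the metadata option stores Linux permission bits on Windows files; umask/fmask set default modes — verify each option against current WSL documentation before depending on it):

```ini
[automount]
enabled = true
# Mount options passed to drvfs for the /mnt/* mounts.
# Changing these affects correctness, not just speed.
options = "metadata,umask=22,fmask=11"
```

Changes take effect after a wsl.exe --shutdown from Windows and a fresh WSL session.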

Fast diagnosis playbook (find the bottleneck in minutes)

If someone says “WSL is slow,” don’t start by tweaking obscure sysctls. Start by finding which side of the boundary they’re on and what shape the I/O is.

First: identify where the repo lives

  • If the path starts with /mnt/, assume boundary overhead until proven otherwise.
  • If the path is under /home (or another native Linux path), the slowdown is likely elsewhere: CPU contention, memory pressure, antivirus scanning the VHDX file, Docker mount patterns, or something pathological in the toolchain.

Second: identify the workload type

  • Many small files (node_modules, Python site-packages, Git status, ripgrep, language server indexing): boundary overhead dominates.
  • Large sequential files (media processing, model checkpoints): you’re more likely limited by raw disk throughput or CPU.
  • Lots of fsync/rename (databases, package managers): correctness layers and sync semantics dominate.

Third: check for “double translation” patterns

  • Running Linux tools on Windows files (/mnt/c)
  • Running Windows tools on Linux files (\\wsl$ paths accessed heavily by Windows apps)
  • Containers bind-mounting Windows paths into Linux containers

Fourth: measure, don’t vibe

Pick one micro-benchmark that matches the workload: directory walks, file creates, git status, or a build step. Compare Linux-path vs Windows-path. If you see a big delta, the decision is simple: move the files or change the mount strategy.
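A minimal sketch of such a micro-benchmark, assuming GNU date and find (the function name is mine):

```shell
# Time a full directory walk plus a stat per file under the given path.
# Run it against the same repo on /mnt/c and on ~/src and compare.
walk_bench() {
  local dir="$1"
  local start end
  start=$(date +%s%N)
  find "$dir" -type f -exec stat {} + >/dev/null 2>&1
  end=$(date +%s%N)
  echo "$dir: $(( (end - start) / 1000000 )) ms"
}
```

One number per side of the boundary, same repo, same command. If the delta is a multiple rather than a percentage, you have your answer.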

Hands-on tasks: commands, outputs, and the decision you make

These are real commands you can run inside WSL. Each task includes what the output means and what you should do next.

Task 1: Confirm which WSL version you’re using

cr0x@server:~$ wsl.exe -l -v
  NAME            STATE           VERSION
* Ubuntu-22.04    Running         2

Meaning: VERSION 2 means you’re on WSL2 (good default). VERSION 1 behaves differently, especially for filesystem interop.

Decision: If you’re on WSL1 and need better Linux compatibility or better native Linux filesystem behavior, migrate to WSL2. If your workflow is mostly Windows filesystem access, WSL1 can sometimes feel less bad on /mnt/c—but that’s a niche trade.

Task 2: See where you are (WSL path vs Windows mount)

cr0x@server:~$ pwd
/mnt/c/Users/alex/projects/myapp

Meaning: You’re on the Windows filesystem via /mnt/c.

Decision: If this repo has many small files (most do), move it into ~/src and rerun your slow operation. Expect a night-and-day change.

Task 3: Identify the filesystem type backing your current directory

cr0x@server:~$ df -T .
Filesystem     Type  1K-blocks      Used Available Use% Mounted on
C:             9p    487531516 214221812 273309704  44% /mnt/c

Meaning: Type 9p indicates a paravirtualized file sharing mechanism. This is the boundary path; metadata-heavy workloads often suffer here.

Decision: If you see ext4 under / or /home, you’re on the native WSL filesystem. If you see 9p (or similar), avoid running builds here.
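If you want one command that answers "which side am I on?", a small helper can classify a path by the filesystem type that stat reports (the function name and the type lists are mine and illustrative, not exhaustive):

```shell
# Classify a path as Linux-native or Windows-interop based on the
# filesystem type name from `stat -f -c %T`. GNU stat reports ext4
# as "ext2/ext3"; unrecognized types fall through to "unknown".
fs_side() {
  local t
  t=$(stat -f -c %T "${1:-.}")
  case "$t" in
    ext2/ext3|ext4|btrfs|xfs|tmpfs) echo "linux-native ($t)" ;;
    9p|v9fs|cifs) echo "windows-interop ($t)" ;;
    *) echo "unknown ($t)" ;;
  esac
}
```

Usage: fs_side . inside your repo. "windows-interop" means the boundary overhead applies.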

Task 4: Compare “stat storm” behavior on Windows vs Linux paths

cr0x@server:~$ time bash -lc 'for i in {1..2000}; do stat /mnt/c/Windows/notepad.exe >/dev/null; done'

real    0m3.214s
user    0m0.103s
sys     0m0.521s

Meaning: Thousands of metadata ops on /mnt/c cost seconds.

Decision: If your workload looks like this (Git, Node, Python), don’t keep the repo on /mnt/c.

Task 5: Run the same metadata loop on the Linux filesystem

cr0x@server:~$ time bash -lc 'for i in {1..2000}; do stat /bin/bash >/dev/null; done'

real    0m0.152s
user    0m0.092s
sys     0m0.054s

Meaning: Same syscall pattern, much faster on native Linux paths.

Decision: Use this before/after moving a repo to prove the problem is boundary overhead, not “WSL is slow.”

Task 6: Check mount options for /mnt/c (they matter)

cr0x@server:~$ mount | grep " /mnt/c "
C: on /mnt/c type 9p (rw,relatime,dirsync,aname=drvfs;path=C:\;uid=1000;gid=1000;metadata,umask=22,fmask=11)

Meaning: Options like metadata influence how Linux permissions are stored/emulated on Windows files. Some options improve correctness but can cost performance.

Decision: If you must work on /mnt/c, be intentional about mount options via /etc/wsl.conf. If you don’t have a strong reason, don’t optimize this—move the repo instead.

Task 7: Confirm your WSL distro filesystem is ext4

cr0x@server:~$ df -T /
Filesystem     Type  1K-blocks     Used Available Use% Mounted on
/dev/sdc       ext4  263174212 73488328 176211952  30% /

Meaning: ext4 is the native Linux filesystem in WSL2. This is where you want your repo and build artifacts.

Decision: If / is ext4, you already have the fast lane. Put your projects here.

Task 8: Find top I/O offenders during a slow command

cr0x@server:~$ sudo apt-get update -y >/dev/null 2>&1; sudo apt-get install -y strace >/dev/null 2>&1
cr0x@server:~$ strace -f -tt -T -o /tmp/trace.log git status >/dev/null 2>&1
cr0x@server:~$ tail -n 3 /tmp/trace.log
12:01:11.402033 stat("/mnt/c/Users/alex/projects/myapp/.git/index", {st_mode=S_IFREG|0644, st_size=112394, ...}) = 0 <0.011224>
12:01:11.413901 openat(AT_FDCWD, "/mnt/c/Users/alex/projects/myapp/node_modules", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3 <0.019887>
12:01:11.434022 getdents64(3, /* 256 entries */, 32768) = 8192 <0.028441>

Meaning: Calls touching /mnt/c take tens of milliseconds each. Multiply by thousands and you get sadness.

Decision: Move the repo. If you can’t, reduce file count (vendoring, node_modules pruning) or change workflow to avoid boundary-heavy operations.

Task 9: Check memory pressure (paging turns I/O into molasses)

cr0x@server:~$ free -h
               total        used        free      shared  buff/cache   available
Mem:           7.7Gi       6.9Gi       214Mi       118Mi       630Mi       356Mi
Swap:          2.0Gi       1.8Gi       220Mi

Meaning: You’re under memory pressure and using swap. Every file operation competes with paging.

Decision: Close the browser tab farm, reduce parallelism, or adjust WSL memory limits (via .wslconfig on Windows) if your workload legitimately needs RAM.
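WSL2's global resource limits live in a .wslconfig file under your Windows user profile (%UserProfile%\.wslconfig). A hedged sketch — the values are examples, not recommendations; size them for your machine and workload:

```ini
[wsl2]
# Cap the VM's RAM so Windows and WSL stop fighting over the same memory.
memory=6GB
# Swap inside the VM; set to 0 to disable it entirely.
swap=2GB
processors=4
```

Apply with wsl.exe --shutdown from Windows; the next WSL session picks up the new limits.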

Task 10: Measure directory traversal speed where your repo lives

cr0x@server:~$ time bash -lc 'find . -type f >/dev/null'

real    0m7.842s
user    0m0.231s
sys     0m1.104s

Meaning: A plain directory walk taking ~8 seconds is a giant red flag for boundary overhead (or an enormous repo, but you’d know).

Decision: Run the same find after moving the repo into ~/src. If it drops by multiples, you’ve found the bottleneck.

Task 11: Check whether your repo accidentally lives on Windows via symlink/path confusion

cr0x@server:~$ readlink -f .
/mnt/c/Users/alex/projects/myapp

Meaning: Even if your shell prompt looks “Linux-y,” this confirms the real backing path.

Decision: If it resolves to /mnt/*, treat it as Windows filesystem access and expect slower metadata I/O.

Task 12: Validate that Git is doing extra work due to filesystem semantics

cr0x@server:~$ git config --show-origin --get core.fscache
file:/home/cr0x/.gitconfig    true
cr0x@server:~$ git config --show-origin --get core.preloadindex
file:/home/cr0x/.gitconfig    true

Meaning: Git’s filesystem cache and preload index can help, but they are not magic if you’re on /mnt/c. They reduce some syscalls; they don’t remove the boundary.

Decision: Keep these enabled for large repos, but don’t use them as an excuse to keep the repo on Windows. Move the repo first.

Task 13: Check whether your slow path is a container bind mount from Windows

cr0x@server:~$ docker info 2>/dev/null | sed -n '1,12p'
Client:
 Version:           26.1.0
Server:
 Containers: 3
  Running: 1
  Paused: 0
  Stopped: 2
 Storage Driver: overlay2

Meaning: Docker is present and likely integrated with WSL. Now the question is where your bind mounts point.

Decision: If your docker run -v uses /mnt/c paths, change it to a WSL path (for Linux containers) and retest.

Task 14: Observe mount sources inside a running container

cr0x@server:~$ docker run --rm -v /mnt/c/Users/alex/projects/myapp:/work alpine sh -lc 'mount | head -n 5'
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/...,upperdir=/var/lib/docker/overlay2/...,workdir=/var/lib/docker/overlay2/...)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
9p on /work type 9p (rw,relatime,trans=fd,rfdno=...,wfdno=...)

Meaning: The container sees the bind mount as 9p. That’s the boundary again, now inside the container.

Decision: For Linux containers, bind mount a path under /home in WSL instead. The container will typically see it as a native Linux filesystem and behave better.

Three corporate mini-stories from the trenches

1) Incident caused by a wrong assumption: “It’s the network”

The ticket came in as a classic: “CI is fine, but local builds are randomly slow.” The team was on Windows laptops using WSL2. The repo was huge, a monorepo with a build system that loved scanning the world to decide what changed. People blamed Wi‑Fi, VPN, DNS—anything that sounded like a shared external dependency.

One engineer escalated it as a “network incident” because pulling dependencies felt slow and git status felt slow. That was the wrong assumption, but it was plausible. In corporate land, plausibility is often enough to waste a week.

We did what SREs do when they’re tired of narratives: we measured. A strace showed the build wasn’t waiting on sockets; it was waiting on stat() and getdents64() calls against /mnt/c. The repo had been cloned into a Windows directory so that a Windows-native IDE could “see it easily.” Every incremental build walked tens of thousands of files across the boundary.

The fix was aggressively boring: clone into ~/src, use Remote WSL for the editor, and stop trying to make one directory serve two operating systems at once. Build times stabilized. The “randomness” vanished because it was never random; it was just cache effects and background scanning amplifying a fundamentally slow path.

The postmortem lesson wasn’t about WSL. It was about assumptions. If you treat local performance as a network problem by default, you’ll keep “fixing” VPNs while your files take the scenic route.

2) Optimization that backfired: “Let’s keep it on C: so backups catch it”

A different org had a compliance-driven directive: developer workstations must back up “work product” daily. Someone decided the easiest way to ensure compliance was to require repos to live under the user’s Windows profile directory. That way the existing Windows backup agent would catch everything without special cases.

On paper, it looked clean. In practice, it was an anti-performance policy. Node and Python projects crawled. Containers with bind mounts into those directories ran like they were using a network drive from 1998. Developers started working around it: copying repos to random places, disabling security controls where they could, or “temporarily” working on unbacked-up scratch drives.

The attempted optimization—simplify backups—created new risk. The security team saw exceptions proliferate. The dev team saw productivity fall. Everyone got grumpy, which is the most reliable metric in operations.

The eventual compromise was smarter: keep source repos inside WSL for performance, and back up via an approved mechanism that can handle the WSL VHDX or export critical repos on a schedule. It required coordination and a tiny bit of engineering effort, which is why it didn’t happen first.

This is the pattern: an optimization that ignores the I/O shape of development will backfire. You can’t policy your way around syscalls.

3) Boring but correct practice that saved the day: “Standardize where code lives”

A platform team got tired of every onboarding session turning into a bespoke performance debugging workshop. They didn’t want heroics; they wanted a default that worked. So they standardized a small set of practices: code goes in ~/src inside WSL, dev containers mount from there, and editors attach via WSL remote.

They also codified two diagnostics into the onboarding doc: run df -T . to confirm you’re on ext4, and run a small find traversal timing test. If either check failed, the setup wasn’t “done,” regardless of whether the app compiled once.

Then came the day a new Windows security agent rollout caused noticeable overhead on file operations in Windows user directories. It could have been chaos: teams missing deadlines, a hundred “WSL is broken” messages, and the usual blame pinball between IT and engineering.

But the teams who followed the boring rule barely noticed. Their hot paths lived in WSL’s ext4, away from the new scanning hooks. Support tickets still came in, but they clustered around people who had deviated from the standard. That’s a nice operational property: the blast radius correlates with noncompliance.

The practice wasn’t clever. It was correct. And correct scales better than clever every time.

Common mistakes: symptom → root cause → fix

This is the stuff I see repeatedly: the same symptoms, the same root causes, the same fixes. If you recognize your pain here, stop experimenting and do the fix.

1) “git status takes forever”

  • Symptom: git status is 2–20 seconds in a medium repo.
  • Root cause: Repo on /mnt/c and Git walking lots of files + boundary metadata overhead.
  • Fix: Move repo into ~/src. Keep Git operations inside WSL. Use VS Code Remote – WSL if you want a Windows UI.

2) “npm install is unbearably slow”

  • Symptom: CPU isn’t pegged, but install time is huge; lots of time “doing nothing.”
  • Root cause: Thousands of small file creates on Windows filesystem; possible antivirus scanning each write.
  • Fix: Keep the project and package cache inside WSL ext4. Avoid installing node_modules on /mnt/c. If corporate policy forces Windows paths, push for Defender exclusions for the relevant directory (carefully and with security sign-off).

3) “My app hot reload doesn’t pick up changes, so I put code on /mnt/c”

  • Symptom: File watchers miss events inside WSL, so someone moved the repo to Windows paths to “fix it.”
  • Root cause: Misconfigured watcher limits or tool-specific file watching issues; moving to Windows traded correctness for performance problems.
  • Fix: Fix watchers in WSL (increase inotify limits, adjust tooling to polling if needed). Keep repo in WSL; don’t relocate it to Windows as a first response.
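Raising the inotify watch limit is usually a two-liner. The value below is a common choice, not a magic number — size it to your tree (the mutating commands are shown as comments because they need root):

```shell
# Inspect the current per-user inotify watch limit.
current=$(cat /proc/sys/fs/inotify/max_user_watches 2>/dev/null || echo 0)
echo "max_user_watches is currently $current"

# To raise it for the running kernel (lost on reboot or wsl --shutdown):
#   sudo sysctl fs.inotify.max_user_watches=524288
# To persist it inside the WSL distro:
#   echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf
```

If watchers still miss events after raising the limit, switch the specific tool to polling mode rather than relocating the repo to Windows.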

4) “Docker builds are slow when I mount my source”

  • Symptom: Containerized builds crawl; native builds are less awful.
  • Root cause: Bind mount from /mnt/c into a Linux container; container sees a 9p mount and suffers.
  • Fix: Bind mount from WSL’s ext4 paths. Or copy the source into the image for build steps that do heavy I/O (and accept the tradeoff).

5) “It was fast yesterday”

  • Symptom: Same repo, same commands, sudden slowdown.
  • Root cause: Memory pressure, Windows update/Defender update changing scanning behavior, WSL VM in a bad state, or the repo quietly moved paths (zip extraction into a Windows folder, etc.).
  • Fix: Reconfirm path type (df -T .), check memory (free -h), and try wsl.exe --shutdown from Windows to reset the VM state.

6) “My editor is slow, not the build”

  • Symptom: Search, indexing, and language server features lag.
  • Root cause: Editor is Windows-native scanning \\wsl$ paths heavily, or editor is in WSL scanning /mnt/c. Either way: boundary.
  • Fix: Keep editor and filesystem aligned. Windows editor + Windows files, or WSL tooling + WSL files with remote integration.

Checklists / step-by-step plan

Checklist A: The “make it fast” plan (the default you should follow)

  1. Create a home for repos inside WSL:
    cr0x@server:~$ mkdir -p ~/src
    
  2. Clone your repo inside WSL:
    cr0x@server:~$ cd ~/src
    cr0x@server:~$ git clone git@github.com:example/acme-app.git
    Cloning into 'acme-app'...
    remote: Enumerating objects: 12453, done.
    Receiving objects: 100% (12453/12453), 22.3 MiB | 18.2 MiB/s, done.

    Decision: If cloning here is fast but builds were slow on /mnt/c, you’ve validated the rule.

  3. Run your slow command again (build/test/install) and time it:
    cr0x@server:~$ cd ~/src/acme-app
    cr0x@server:~$ time git status
    On branch main
    Your branch is up to date with 'origin/main'.
    
    real    0m0.187s
    user    0m0.090s
    sys     0m0.070s

    Decision: If it dropped from seconds to sub-second, stop debugging and adopt the rule permanently.

  4. Keep build artifacts inside WSL too. Don’t output to /mnt/c “for convenience.” Convenience is how latency moves in.

Checklist B: If you must use /mnt/c anyway (do it with eyes open)

  1. Confirm you’re on /mnt/c:
    cr0x@server:~$ df -T .
    Filesystem     Type  1K-blocks      Used Available Use% Mounted on
    C:             9p    487531516 214221812 273309704  44% /mnt/c

    Decision: Expect metadata-heavy operations to be slower. Plan mitigations.

  2. Reduce file churn: avoid huge dependency trees where possible, use lockfiles, prune dev dependencies, and keep caches inside WSL if the tool allows it.
  3. Be careful with mount option tweaks. They can trade speed for correctness. If your environment depends on Unix permissions, disabling metadata can break builds in subtle ways.

Checklist C: Container workflows (Docker/Podman) that don’t sabotage you

  1. Keep the source on ext4 (~/src).
  2. Bind mount from WSL path into containers:
    cr0x@server:~$ docker run --rm -v /home/cr0x/src/acme-app:/work -w /work node:20-bullseye bash -lc 'node -v'
    v20.11.1

    Decision: If the mount shows up as a native Linux filesystem inside the container (not 9p), you’re on the right track.

  3. If builds still crawl, avoid bind mounts for the build step: copy source in the Dockerfile and build inside the image. That’s usually faster for heavy I/O and reproducibility, and worse for iteration speed. Choose intentionally.
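The "copy source into the image" pattern looks like this as a Dockerfile sketch (the base image, paths, and npm scripts are examples, not your project's actual setup):

```dockerfile
# Copy the source into the image instead of bind-mounting it, so the
# heavy I/O of dependency install and build hits the container's own
# overlay filesystem rather than a 9p mount.
FROM node:20-bullseye AS build
WORKDIR /app
# Copy manifests first so the npm ci layer caches across source edits.
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
```

The layer ordering matters: with manifests copied before the source, editing application code doesn't invalidate the cached dependency-install layer.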

One quote (paraphrased idea) that matters here

Werner Vogels (paraphrased idea): “Everything fails, all the time—design and operate systems with that reality in mind.”

This is a performance article, but it’s also an ops article: assume the boundary will betray you under load, because it’s a complex integration point. Put your hot path somewhere boring and native.

FAQ

1) What exactly is “the filesystem boundary” in WSL?

It’s the line between Linux-native storage inside WSL (ext4 in the WSL VHDX) and Windows storage mounted into WSL (typically /mnt/c). Crossing it forces translation and often extra scanning.

2) Is WSL2 always faster than WSL1?

No. WSL2 is usually faster for Linux-native filesystem access and compatibility. WSL1 can sometimes feel less painful when operating directly on Windows files because it doesn’t involve the same VM file-sharing path. Most modern dev setups still benefit from WSL2 when you follow the rule and keep code in ext4.

3) Can I keep my repo on Windows and just “tune” WSL?

You can improve things at the margins, but you’re fighting physics: lots of metadata ops across a translation layer. If performance matters, move the repo. Tuning is for constraints, not for preference.

4) What about accessing WSL files from Windows apps via \\wsl$?

Occasional access is fine. Heavy Windows-side scanning, indexing, or building against \\wsl$ paths can reintroduce boundary pain in the other direction. Keep the “heavy tool” on the same side as the files.

5) Why is Node.js (and node_modules) such a common problem?

Because it’s a small-file factory. Installers and bundlers create and traverse enormous directory trees, touching metadata constantly. Boundary filesystems are worst at that exact pattern.

6) Does antivirus really matter that much?

Yes, often. Antivirus is optimized for safety, not for your build graph. When it hooks file open/create events on Windows paths, it can add noticeable latency per operation. Thousands of operations later, your coffee is cold.

7) If I move my repo into WSL, how do I back it up?

Use Git remotes for source-of-truth and treat local clones as disposable when possible. If you need workstation backups, use an approved approach that captures WSL data (or periodically exports repos). Don’t “solve” backups by forcing slow paths into daily development.

8) My company requires repos under my Windows home directory. What’s the least bad option?

Push back with measurements: show the before/after timing and df -T outputs. Propose a compliant alternative (backing up WSL data or mirroring source). If you can’t change policy, reduce file churn, avoid containers binding Windows paths, and keep caches inside WSL where allowed.

9) Is it safe to store important data inside WSL’s ext4 VHDX?

It’s as safe as any local disk data—meaning you still need backups. It’s not inherently unsafe, but it’s also not magic. If your laptop dies, your VHDX dies with it unless you have a plan.

10) What’s the single best test to prove the problem is boundary I/O?

Run the same operation in the same repo once on /mnt/c and once after cloning into ~/src. If the time changes by multiples, it’s boundary overhead. No debate required.

Conclusion: practical next steps

If you remember one thing: keep Linux workloads on Linux filesystems. Clone into ~/src. Build there. Run containers from there. Use remote editor integration so you don’t feel like you’re “giving up” Windows UI.

Next steps you can do today:

  1. Run pwd and df -T . in your repo. If you see /mnt/c and 9p, you’ve found the problem.
  2. Clone the repo into ~/src and rerun your slow command with time. Keep the numbers.
  3. If Docker is involved, stop bind-mounting Windows paths for Linux containers.
  4. Standardize the rule across your team. Performance problems love inconsistency.

WSL isn’t a mystery. It’s an interface between two operating systems with different assumptions. Respect the boundary, and WSL will stop acting like it has somewhere better to be.
