WSL2 + VPN: Why It Breaks (and How to Fix It)

You connect to the corporate VPN, open your WSL2 shell, and suddenly nothing works. Git can’t reach the internal repo.
Curl hangs. DNS returns lies. Your app can hit the internet or the intranet, but never both.

This isn’t you being cursed. It’s the predictable outcome of layering a Linux VM (WSL2) behind Windows NAT, then letting a VPN client
rewrite routes and DNS like it owns the machine. Which, to be fair, it sort of does.

The mental model: why WSL2 and VPNs collide

WSL2 is not a compatibility layer in the old sense. It’s a lightweight VM with a real Linux kernel.
Networking-wise, it behaves like a machine sitting behind a small NAT gateway implemented by Windows.
Your Linux distro gets an address in a private range (typically within 172.16.0.0/12, occasionally 192.168.0.0/16),
and Windows does translation and forwarding.

VPN clients, on the other hand, are professionally intrusive. They install virtual adapters, inject routes,
override DNS, enforce policy, and sometimes block “non-corporate” forwarding paths on purpose.
The VPN usually assumes it’s dealing with one host network stack. WSL2 is a second stack behind it.
Now you’ve got two routing domains and at least three places DNS might be “helpfully” rewritten:
Windows, the VPN client, and WSL’s auto-generated /etc/resolv.conf.

Where the packets actually go

When a process in WSL2 connects to git.internal.corp, it does:

  • Linux app asks Linux resolver for DNS.
  • Linux resolver consults /etc/resolv.conf (often pointing to a Windows-side resolver IP).
  • Traffic leaves the WSL VM to the Windows host via the WSL virtual switch.
  • Windows decides which interface/route wins: VPN adapter, Wi‑Fi/Ethernet, or “nope”.
  • VPN client may NAT, encrypt, or block forwarding depending on policy.

The breakage usually falls into one of four buckets:
routing, DNS, MTU/fragmentation, or policy/firewall.
You can fix all four, but you have to stop guessing which bucket you’re in.

Joke #1: VPN clients are like cats—independent, territorial, and convinced they’re the only thing in the house that matters.

The uncomfortable truth

Some VPN policies intentionally make WSL2 hard. Not because WSL2 is evil, but because it is effectively a second machine.
Security teams worry about unmanaged Linux environments pivoting into internal networks.
So the “fix” might not be a tweak; it might be “get your VPN team to allow it” or “use a sanctioned dev VM”.
Still, in many companies, it’s just misconfiguration and defaults fighting defaults.

Fast diagnosis playbook (four checks, in order)

When WSL2 + VPN breaks, don’t start by reinstalling WSL or changing ten settings at once.
Do the following in order. Each step narrows the fault domain.

1) Is it DNS or routing?

  • If ping 10.x.x.x works but ping git.internal.corp doesn’t, it’s DNS.
  • If DNS resolves to the right IP but TCP can’t connect, it’s routing/firewall/MTU.
  • If only some internal subnets work, it’s split tunnel route injection.
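Two commands make that first split concrete. The hostname and IP below are this article's placeholders; substitute your own internal service.

```shell
# Does the name resolve at all? (getent uses the same resolver path as most apps)
getent hosts git.internal.corp >/dev/null && echo "DNS: resolves" || echo "DNS: broken"
# Can we reach the IP regardless of DNS?
ping -c 1 -W 2 10.50.12.34 >/dev/null 2>&1 && echo "IP reach: yes" || echo "IP reach: no"
```

If the first line says broken and the second says yes, you're in the DNS bucket; the reverse points at routing or policy.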

2) Compare Windows vs WSL behavior

  • If Windows can reach internal resources but WSL can’t, the problem is the NAT/forwarding boundary or DNS handoff.
  • If Windows can’t reach them either, stop blaming WSL. Fix VPN connection, routes, or corporate DNS first.

3) Identify the interface that should win

  • Full tunnel: default route should go via VPN. DNS should be corporate.
  • Split tunnel: only specific internal prefixes go via VPN; default stays on local internet.
  • Either way: WSL’s traffic must be allowed to traverse Windows to that interface.

4) Check MTU last (but don’t forget it)

MTU problems look like “DNS works, ping works, HTTPS hangs” or “small requests work, large ones stall”.
If you see that pattern, test MTU before you waste an hour arguing with routing tables.
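A quick probe sweep settles it in seconds. Sketch only: 10.50.12.34 is the article's example internal IP, and the sizes assume a 1500-byte base MTU (payload plus 28 bytes of IP/ICMP headers).

```shell
# Walk payload sizes downward with Don't-Fragment set; the largest "OK" marks the path MTU.
for size in 1472 1432 1392 1352 1312; do
  if ping -c 1 -W 1 -M do -s "$size" 10.50.12.34 >/dev/null 2>&1; then
    echo "payload $size: OK (path MTU >= $((size + 28)))"
  else
    echo "payload $size: blocked or lost"
  fi
done
```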

Facts and history: why this is a recurring mess

  • WSL1 and WSL2 are fundamentally different networks. WSL1 shared the Windows stack; WSL2 is NATed behind a VM.
  • WSL2 uses a Hyper‑V virtual switch under the hood. Even on Windows Home, the plumbing behaves like a mini Hyper‑V setup.
  • Many VPN clients still ship kernel drivers. They hook deeply into Windows networking to enforce policy and routes.
  • DNS “split horizon” is common in enterprises. The same hostname resolves differently inside vs outside the VPN, making stale resolvers disastrous.
  • NRPT and per-interface DNS are a Windows thing. Windows can send different DNS queries to different servers depending on domain rules; Linux typically doesn’t unless configured.
  • MTU pain is older than WSL. VPN encapsulation reduces effective MTU, and PMTUD is still fragile in real networks.
  • Default route precedence is not universal. Windows route metrics and VPN “force tunnel” settings often surprise people coming from Linux.
  • Corporate security baselines often block IP forwarding. Even if Windows can reach the VPN, it may refuse to forward traffic from a “virtual” NIC.
  • WSL2 localhost access changed over time. Recent Windows builds improved localhost forwarding between Windows and WSL, but VPNs can still interfere.

Failure modes mapped to root causes

Failure mode A: “DNS is broken in WSL, but Windows is fine”

Most common. WSL autogenerates /etc/resolv.conf pointing at a Windows-side resolver IP.
When you connect VPN, Windows DNS servers change. WSL doesn’t always update cleanly, or it updates to a resolver that can’t reach the
corporate DNS due to interface binding.

Failure mode B: “Internal IPs time out from WSL, but resolve correctly”

That’s typically routing or policy. Windows can route to the internal subnet via VPN adapter,
but WSL traffic might not be allowed to traverse that path (firewall, VPN client policy, or missing route to return traffic).

Failure mode C: “Split tunnel: one subnet works, another doesn’t”

Split tunneling relies on pushing many internal routes. Some VPN clients push only to Windows and assume local processes are the only source.
WSL traffic may use a different source address range (the WSL NAT subnet), and the corporate side might not have a return route.

Failure mode D: “HTTPS stalls, git clone hangs, but ping works”

MTU / fragmentation. Encapsulation shrinks MTU; some paths drop ICMP “fragmentation needed” messages; TCP blackholes.
It’s boring, it’s ancient, and it still burns teams weekly.

Failure mode E: “Everything works until sleep/resume”

Resume often changes interface ordering, route metrics, or the VPN client’s internal state.
WSL keeps running, still holding old DNS or routes, while Windows has moved on.

Hands-on tasks: commands, outputs, and decisions (12+)

These tasks are designed to be run exactly when things are broken. Each one tells you something specific,
and tells you what to do next. Run both Windows-side and WSL-side checks; the delta is the clue.

Task 1 (WSL): What IP and default gateway do I have?

cr0x@server:~$ ip -br addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             172.26.112.45/20

cr0x@server:~$ ip route show
default via 172.26.112.1 dev eth0
172.26.112.0/20 dev eth0 proto kernel scope link src 172.26.112.45

What it means: WSL is on 172.26.112.0/20 and uses 172.26.112.1 as a gateway (Windows NAT).
Decision: If the gateway is missing or the interface is DOWN, your WSL networking is broken before the VPN even enters the story.
Restart WSL (wsl --shutdown from Windows) and try again.

Task 2 (WSL): Is DNS pointing somewhere sane?

cr0x@server:~$ cat /etc/resolv.conf
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateResolvConf = false
nameserver 172.26.112.1

What it means: WSL is using the Windows gateway as a DNS proxy/resolver.
Decision: If corporate DNS is failing, this is your first suspect. Either Windows’s resolver is wrong for your VPN state,
or WSL is stuck with stale DNS. Proceed to Task 3 and 4.

Task 3 (WSL): Can I resolve an internal name, and what server answered?

cr0x@server:~$ resolvectl status 2>/dev/null | sed -n '1,120p'
Global
       Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: foreign
Current DNS Server: 172.26.112.1
       DNS Servers: 172.26.112.1

What it means: Your resolver is the WSL gateway. That can work, but only if Windows is forwarding queries appropriately.
Decision: If internal domains don’t resolve, test direct DNS to corporate servers next (Task 4) to separate “WSL resolver” from “DNS reachability”.

Task 4 (WSL): Query corporate DNS directly (bypass the proxy)

cr0x@server:~$ dig @10.20.30.40 git.internal.corp +time=2 +tries=1

; <<>> DiG 9.18.24-1ubuntu1.3-Ubuntu <<>> @10.20.30.40 git.internal.corp +time=2 +tries=1
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

What it means: WSL cannot reach the corporate DNS server IP directly.
Decision: This is routing/policy, not a resolver file problem. Go to Task 6 (routing) and Task 10 (firewall/policy hints).

Task 5 (Windows via WSL): Can I compare with Windows DNS quickly?

cr0x@server:~$ powershell.exe -NoProfile -Command "Resolve-DnsName git.internal.corp | Select-Object -First 1"
Name                                           Type   TTL   Section    IPAddress
----                                           ----   ---   -------    ---------
git.internal.corp                              A      60    Answer     10.50.12.34

What it means: Windows can resolve the name.
Decision: Now you know: DNS works on Windows, but WSL can’t use it. Fix is usually WSL DNS integration (see fixes section: “pin resolv.conf” or “use systemd-resolved properly”).

Task 6 (WSL): What route would I use to reach an internal IP?

cr0x@server:~$ ip route get 10.50.12.34
10.50.12.34 via 172.26.112.1 dev eth0 src 172.26.112.45 uid 1000

What it means: WSL will send internal traffic to the Windows gateway. That’s expected.
Decision: If Windows then doesn’t forward it into the VPN, you’re dealing with a Windows routing/policy/firewall problem or a VPN client that blocks forwarding from WSL’s vNIC.

Task 7 (WSL): Can I reach the internal IP at all?

cr0x@server:~$ ping -c 2 10.50.12.34
PING 10.50.12.34 (10.50.12.34) 56(84) bytes of data.

--- 10.50.12.34 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1027ms

What it means: No ICMP reachability. Not definitive (ICMP may be blocked), but it’s a strong signal.
Decision: Try TCP connectivity next. If TCP also fails, it’s routing/policy. If TCP works and ping fails, it’s just ICMP policy.

Task 8 (WSL): Test TCP to a known internal port

cr0x@server:~$ nc -vz -w 2 10.50.12.34 443
nc: connect to 10.50.12.34 port 443 (tcp) timed out: Operation now in progress

What it means: TCP can’t get through.
Decision: Move up the stack: check Windows routes and VPN adapter behavior. If Windows can connect but WSL can’t, suspect VPN client restrictions on forwarding/NAT.
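If nc isn't installed in your distro, bash can do the same TCP check by itself via its built-in /dev/tcp path; the IP and port are the article's example values.

```shell
# TCP reachability without netcat: bash opens /dev/tcp/<host>/<port> directly.
timeout 2 bash -c 'exec 3<>/dev/tcp/10.50.12.34/443' 2>/dev/null \
  && echo open || echo closed
```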

Task 9 (Windows via WSL): What does Windows think the routes are?

cr0x@server:~$ powershell.exe -NoProfile -Command "Get-NetRoute -AddressFamily IPv4 | Sort-Object -Property RouteMetric | Select-Object -First 8"
ifIndex DestinationPrefix NextHop     RouteMetric ifMetric PolicyStore
------- ----------------- -------     ----------- -------- -----------
19      0.0.0.0/0         10.8.0.1    5           25       ActiveStore
3       0.0.0.0/0         192.168.1.1 35          25       ActiveStore
19      10.0.0.0/8        0.0.0.0     5           25       ActiveStore
19      172.16.0.0/12     0.0.0.0     5           25       ActiveStore
19      192.168.0.0/16    0.0.0.0     5           25       ActiveStore
1       127.0.0.0/8       0.0.0.0     256         75       ActiveStore

What it means: The VPN (ifIndex 19) is winning default route and internal RFC1918 ranges.
Decision: In full tunnel, this is expected. In split tunnel, it might be wrong and will break local network access.
Either way, if WSL can’t traverse this, the VPN client may be blocking forwarding or Windows firewall is filtering the WSL vEthernet interface.

Task 10 (Windows via WSL): Is the WSL vEthernet profile/firewall hostile?

cr0x@server:~$ powershell.exe -NoProfile -Command "Get-NetAdapter -Name 'vEthernet (WSL)' | Format-List -Property Name,Status,MacAddress,LinkSpeed"
Name       : vEthernet (WSL)
Status     : Up
MacAddress : 00-15-5D-4A-2B-1C
LinkSpeed  : 10 Gbps

cr0x@server:~$ powershell.exe -NoProfile -Command "Get-NetConnectionProfile -InterfaceAlias 'vEthernet (WSL)' | Format-List"
Name             : Network
InterfaceAlias   : vEthernet (WSL)
NetworkCategory  : Public
IPv4Connectivity : Internet

What it means: The WSL virtual NIC is categorized as Public. Many corporate firewalls clamp down hard on Public profiles.
Decision: If policy allows, set it to Private. If policy doesn’t allow, you need explicit firewall rules or a sanctioned approach.

Task 11 (WSL): Check MTU and test for blackholing

cr0x@server:~$ ip link show dev eth0 | sed -n '1,3p'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:15:5d:4a:2b:1c brd ff:ff:ff:ff:ff:ff

cr0x@server:~$ ping -c 2 -M do -s 1473 10.50.12.34
PING 10.50.12.34 (10.50.12.34) 1473(1501) bytes of data.
ping: local error: message too long, mtu=1500

cr0x@server:~$ ping -c 2 -M do -s 1360 10.50.12.34
PING 10.50.12.34 (10.50.12.34) 1360(1388) bytes of data.

--- 10.50.12.34 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1018ms

What it means: The first ping deliberately exceeds the interface MTU (1501 > 1500) and fails locally, confirming the limit. The second stays well under it and still gets no reply—so it's not just "too big packets".
Decision: If smaller sizes succeed but larger stall, clamp MTU (or MSS) on the WSL side or fix PMTUD. If both fail, return to routing/policy.

Task 12 (WSL): Trace where packets stop (basic)

cr0x@server:~$ traceroute -n -w 1 -q 1 10.50.12.34
traceroute to 10.50.12.34 (10.50.12.34), 30 hops max, 60 byte packets
 1  172.26.112.1  0.240 ms
 2  * 
 3  * 
 4  *

What it means: Traffic reaches the Windows gateway, then disappears. That’s the boundary.
Decision: Stop tweaking Linux routes. You need to make Windows+VPN forward traffic properly, or adjust the VPN posture to support WSL.

Task 13 (WSL): See if a proxy is expected (common in corporate VPNs)

cr0x@server:~$ env | grep -iE 'http_proxy|https_proxy|no_proxy'
https_proxy=http://proxy.internal.corp:8080
http_proxy=http://proxy.internal.corp:8080
no_proxy=localhost,127.0.0.1,.internal.corp

What it means: Your environment expects an HTTP proxy.
Decision: If direct TCP to internal endpoints fails but proxy access works on Windows, you may need consistent proxy vars inside WSL (or use a PAC solution via Windows). Also ensure no_proxy includes internal domains that shouldn’t be proxied.

Task 14 (WSL): Confirm what git actually tries to do

cr0x@server:~$ GIT_CURL_VERBOSE=1 GIT_TRACE=1 git ls-remote https://git.internal.corp/team/repo 2>&1 | sed -n '1,40p'
20:17:14.123456 git.c:460               trace: built-in: git ls-remote https://git.internal.corp/team/repo
20:17:14.130000 http.c:725              == Info:   Trying 10.50.12.34:443...
20:17:16.132000 http.c:725              == Info: connect to 10.50.12.34 port 443 failed: Connection timed out
fatal: unable to access 'https://git.internal.corp/team/repo/': Failed to connect to git.internal.corp port 443 after 2002 ms: Timeout was reached

What it means: Resolution succeeded; connect timed out.
Decision: Don’t touch DNS. Fix routing/forwarding/policy or use the corporate proxy if required.

Fix patterns that actually hold up

There are dozens of forum “fixes” for WSL2+VPN. Half are cargo cult. A quarter are security policy violations.
The rest can be made reliable if you choose the right pattern for your environment: full tunnel, split tunnel, proxy-only, or “VPN client forbids it”.

Fix pattern 1: Stabilize DNS inside WSL (stop chasing moving targets)

If Windows resolves internal names but WSL doesn’t, you usually want WSL to use the corporate DNS servers directly,
or use a stable local forwarder that actually follows VPN state.

Option A: Pin resolv.conf (simple, blunt, effective)

Disable auto-generation and set explicit DNS servers.
This works best when corporate DNS IPs are stable while connected to VPN.

cr0x@server:~$ sudo tee /etc/wsl.conf >/dev/null <<'EOF'
[network]
generateResolvConf = false
EOF

cr0x@server:~$ sudo rm -f /etc/resolv.conf
cr0x@server:~$ sudo tee /etc/resolv.conf >/dev/null <<'EOF'
nameserver 10.20.30.40
nameserver 10.20.30.41
search internal.corp
options timeout:2 attempts:2
EOF

What the output means: No output is fine; you’re writing files.
Decision: Restart WSL (wsl --shutdown) so it re-reads config. If this fixes internal resolution but breaks public DNS off-VPN,
you need conditional DNS or a resolver that switches with VPN.
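A guard many teams add on top: some WSL builds have been reported to recreate /etc/resolv.conf on restart even with generateResolvConf = false. Making the pinned file immutable is blunt but effective, assuming your distro's filesystem supports chattr.

```shell
# Prevent anything from rewriting the pinned file.
sudo chattr +i /etc/resolv.conf
# Verify: look for the 'i' flag in the output.
lsattr /etc/resolv.conf
# To edit it later, drop the flag first: sudo chattr -i /etc/resolv.conf
```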

Option B: Use systemd-resolved properly (less blunt, more correct)

On newer WSL builds, systemd can be enabled. When it’s working, it gives you a real resolver daemon,
and you can point it at DNS servers with proper caching and fallback behavior.
The catch: you must keep the Windows/VPN DNS story consistent, or you’ll just fail faster.

If you’re not already using systemd in WSL, don’t enable it solely to fix VPN DNS unless you can support the change.
It’s good. It’s also another moving part. Production rule: add complexity only when it buys you stability.
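If you do adopt it, the moving parts are small. Sketch under two assumptions: your WSL build supports systemd, and your corporate DNS servers are the 10.20.30.x examples used throughout this article.

```shell
# 1) Enable systemd (then run `wsl --shutdown` from Windows and reopen the shell):
sudo tee -a /etc/wsl.conf >/dev/null <<'EOF'
[boot]
systemd=true
EOF

# 2) After restart, point systemd-resolved at corporate DNS for eth0:
sudo resolvectl dns eth0 10.20.30.40 10.20.30.41
sudo resolvectl domain eth0 internal.corp
resolvectl status eth0   # confirm the per-link DNS servers took effect
```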

Fix pattern 2: Split tunnel reality check (routes must exist both ways)

Split tunnel is where optimism goes to die. Your Windows box gets routes to 10.0.0.0/8 and friends via VPN.
WSL sends packets to Windows gateway. Windows forwards into VPN. Great.
Then the corporate side receives packets sourced from 172.26.112.45 (WSL’s private NAT range) and has no idea how to reply.

In full tunnel, this often “accidentally” works because the VPN client NATs outbound traffic and makes it look like it’s coming from the Windows host’s VPN address.
In split tunnel, NAT behavior varies wildly. Some clients NAT, some don’t, some do it only for Windows processes, and some refuse for “virtual adapters”.

Your fix options:

  • Best: VPN client supports NAT/forwarding for WSL traffic (or is configured to).
  • Also valid: Corporate network has routes back to the WSL NAT subnet (rare; requires network team buy-in and strict controls).
  • Workaround: Use a proxy/jump host/bastion that Windows can reach, and tunnel from WSL over that single allowed path.

Fix pattern 3: Make Windows firewall stop sabotaging the vEthernet (WSL) interface

If the WSL adapter is “Public” and your corporate endpoint firewall is strict, WSL traffic may be blocked from reaching the VPN adapter.
This is common on hardened laptops.

If you are allowed to, set the WSL virtual NIC profile to Private. If you’re not allowed, add narrow firewall rules instead of flipping profiles broadly.
Your security team will appreciate “narrow and auditable” more than “I toggled random stuff until it worked”.
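From an elevated PowerShell on Windows, both approaches look roughly like this. Set-NetConnectionProfile and New-NetFirewallRule are standard cmdlets; the interface alias and the 10.50.12.0/24 subnet are examples from this article, so substitute your own values (and get the rule approved first).

```powershell
# If policy allows: reclassify the WSL adapter as Private.
Set-NetConnectionProfile -InterfaceAlias 'vEthernet (WSL)' -NetworkCategory Private

# If not: one narrow, auditable rule instead of a profile flip.
# Allows outbound HTTPS from the WSL adapter toward a single internal subnet.
New-NetFirewallRule -DisplayName 'WSL to internal HTTPS' `
  -Direction Outbound -Action Allow -Protocol TCP -RemotePort 443 `
  -RemoteAddress 10.50.12.0/24 -InterfaceAlias 'vEthernet (WSL)'
```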

Fix pattern 4: Fix MTU (or MSS) when TCP stalls

VPN encapsulation eats MTU. If you have an underlying MTU of 1500 and the VPN adds overhead,
your effective path MTU might be 1400-ish or lower. If PMTUD fails (because ICMP is filtered somewhere),
large TCP segments blackhole and connections hang in ways that look mystical.

The pragmatic fix is to clamp MTU on the WSL interface or clamp TCP MSS.
You’re trading a bit of throughput for stability. In corporate networks, stability is the only currency accepted.

cr0x@server:~$ sudo ip link set dev eth0 mtu 1400
cr0x@server:~$ ip link show dev eth0 | sed -n '1,2p'
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:15:5d:4a:2b:1c brd ff:ff:ff:ff:ff:ff

What it means: MTU is now 1400.
Decision: Retest the hanging workflow (git clone, curl, whatever). If it fixes stalls, make it persistent (via distro network config),
and document why. If it changes nothing, revert and keep diagnosing.
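If you'd rather not touch the interface MTU (or the change won't persist), clamping TCP MSS achieves the same stability for TCP specifically. A sketch assuming iptables is available in your distro:

```shell
# Clamp outgoing TCP MSS to whatever the path MTU turns out to be.
sudo iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu

# Or pin an explicit value: path MTU 1400 minus 40 bytes of IP+TCP headers = MSS 1360.
# sudo iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN \
#   -j TCPMSS --set-mss 1360
```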

Fix pattern 5: Treat proxies as first-class citizens

Many corporate VPN environments are not “route anywhere you want”. They are “route to proxy, then proxy”.
If Windows has proxy auto-config, but WSL does not, WSL will fail even though the laptop “works”.

In that environment, the right fix is to configure proxy variables in WSL consistently and maintain no_proxy.
Don’t hardcode this into random shell startup files with mystery conditions; use a managed approach (profile scripts),
and keep it visible. Secrets belong in credential stores, not in .bashrc.
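A minimal sketch of that "managed approach": one sourceable file, one visible opt-in line in the profile. The proxy host and port are the article's placeholders.

```shell
# Keep proxy settings in one auditable file, not scattered through dotfiles.
mkdir -p ~/.config
cat > ~/.config/corp-proxy.sh <<'EOF'
export http_proxy="http://proxy.internal.corp:8080"
export https_proxy="http://proxy.internal.corp:8080"
export no_proxy="localhost,127.0.0.1,.internal.corp"
EOF

# One clearly labeled opt-in line in the profile (added once, no mystery conditions):
grep -q corp-proxy ~/.bashrc || \
  echo '[ -f ~/.config/corp-proxy.sh ] && . ~/.config/corp-proxy.sh' >> ~/.bashrc

# Load it in the current shell and sanity-check:
. ~/.config/corp-proxy.sh
echo "$https_proxy"
```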

Joke #2: Debugging VPN networking is like watching a magic show, except the rabbit is your packet and it never comes back.

Fix pattern 6: When the VPN client forbids WSL, stop fighting it

Some VPN clients enforce “no traffic from virtual adapters” or similar. Sometimes this is explicit policy; sometimes it’s an implementation artifact.
If your diagnosis consistently shows packets die at the Windows gateway and your VPN client is known to lock down forwarding,
you have three sane options:

  • Use WSL1 (if it fits your workload) because it shares the Windows network stack and often avoids the NAT boundary entirely.
  • Use a sanctioned dev VM on the VPN (managed by IT) or a remote development host inside the network.
  • Use an SSH bastion reachable from Windows and connect through it (port-forwarding) from WSL; treat it as a controlled egress.
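The bastion option from the last bullet looks like this in practice. bastion.corp, git.internal.corp, and local port 8443 are stand-ins—use whatever your environment actually sanctions.

```shell
# Forward a local port through a bastion that Windows/VPN is allowed to reach.
ssh -N -L 8443:git.internal.corp:443 user@bastion.corp &

# Point tools at the forwarded port; --resolve keeps the TLS hostname intact.
curl --resolve git.internal.corp:8443:127.0.0.1 https://git.internal.corp:8443/
```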

The unsane option is “install a sketchy driver” or “disable the endpoint firewall” to make it work.
That’s how you get your laptop quarantined, and then your week gets worse.

A reliability principle worth keeping

Hope is not a strategy. — James Cameron

Apply it here: stop hoping the route table “looks fine” and prove every layer with targeted tests.

Common mistakes: symptom → root cause → fix

1) Symptom: “WSL can’t resolve internal domains; Windows can”

Root cause: WSL /etc/resolv.conf points to a resolver that doesn’t follow VPN DNS changes, or Windows DNS is per-interface and WSL’s DNS proxy path misses NRPT rules.

Fix: Pin DNS inside WSL (disable generateResolvConf) to corporate DNS servers when on VPN, or implement a resolver approach that follows VPN state reliably.

2) Symptom: “DNS resolves, but TCP to internal IP times out from WSL”

Root cause: VPN client blocks forwarding from the WSL vEthernet adapter, or Windows firewall profile blocks it.

Fix: Check WSL adapter network category; add narrow firewall exceptions; if VPN policy forbids it, use WSL1 or a sanctioned dev host.

3) Symptom: “Split tunnel: some internal subnets work, others don’t”

Root cause: Missing routes from VPN client, overlapping local networks, or return path not known for WSL NAT range.

Fix: Verify Windows routes for each prefix; avoid local LAN ranges that overlap corporate RFC1918; push correct routes via VPN profile; ensure NAT behavior is consistent.

4) Symptom: “After sleep/resume, WSL loses connectivity until restart”

Root cause: Interface metric changes or stale resolver state in WSL; VPN reconnect changes DNS/route ordering.

Fix: Automate wsl --shutdown after VPN reconnect (or at least document it). Prefer stable DNS configuration instead of relying on auto-generated resolv.conf.
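The "automate it" part can start embarrassingly small: a script you run (or trigger) after reconnect. wsl.exe --shutdown and -e are standard flags; wiring this to your VPN client's reconnect event is environment-specific and left out here.

```powershell
# Run after VPN reconnect: restart WSL so it picks up current DNS and routes.
wsl.exe --shutdown
Start-Sleep -Seconds 2
wsl.exe -e true   # cold-start the default distro so the next shell opens fast
```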

5) Symptom: “HTTPS stalls; small requests work; ping works”

Root cause: MTU blackhole due to VPN encapsulation and broken PMTUD.

Fix: Lower MTU on WSL interface (or clamp MSS) and retest. If it works, make it persistent and note the VPN overhead.

6) Symptom: “Local services on Windows aren’t reachable from WSL while on VPN”

Root cause: VPN forces routes or firewall policies that interfere with local subnet/localhost forwarding; sometimes the VPN toggles firewall rules aggressively.

Fix: Use explicit host IPs, validate Windows firewall rules, and consider binding services to the right interface. For development, prefer connecting via localhost when supported, but verify it after VPN connect.

Checklists / step-by-step plan

Checklist A: You need internal DNS + internal TCP from WSL (full tunnel VPN)

  1. Confirm Windows can resolve and connect to the internal service.
  2. In WSL, check /etc/resolv.conf; test dig to corporate DNS directly.
  3. If Windows works and WSL doesn’t, pin WSL DNS to corporate resolvers or fix the DNS proxy path.
  4. Verify Windows routes: VPN default route and internal prefixes should go via VPN adapter.
  5. Check WSL adapter firewall profile; change to Private if allowed or add narrow rules.
  6. If TCP stalls in weird ways, test MTU and clamp if needed.
  7. Document the final state: DNS servers, MTU, and what breaks when off-VPN.

Checklist B: You need split tunnel (internet local, corp subnets via VPN)

  1. List the exact internal prefixes you need (don’t guess; get them from the VPN profile or network team).
  2. On Windows, confirm those prefixes have routes via VPN adapter (Task 9 approach).
  3. In WSL, verify traffic to each prefix goes to the Windows gateway (Task 6) and doesn’t detour.
  4. Test reachability to an internal IP per prefix (Task 8).
  5. If it fails, suspect return path/NAT behavior: see if corporate side expects traffic from the Windows VPN IP only.
  6. Decide: (a) VPN client supports WSL forwarding/NAT; (b) corporate routes back to WSL NAT subnet; or (c) use proxy/bastion.
  7. Stabilize DNS: internal domains must resolve to internal IPs only while on VPN; avoid mixing public and private resolvers.

Checklist C: You’re on a locked-down corporate endpoint

  1. Assume you cannot change firewall profile, install drivers, or add routes persistently.
  2. Prove where packets die (Task 12 traceroute to internal IP).
  3. If the drop is at Windows gateway and Windows itself can reach the target, it’s likely policy against forwarding from virtual adapters.
  4. Stop fighting the laptop. Switch strategy: WSL1, remote dev host, or bastion with port forwards.
  5. Get the exception documented if WSL2 is a business need; “it’s annoying” is not a business need.

Three corporate mini-stories (anonymized, plausible, and painfully familiar)

Mini-story 1: The incident caused by a wrong assumption

A team rolled out a new internal package mirror during a migration. Developers used WSL2 for builds.
The mirror lived on an internal subnet reachable only via VPN.
Windows laptops could reach it. The team assumed that meant “developers can reach it,” full stop.

The first Monday after rollout, build times went from normal to catastrophic. Not because the mirror was down.
Because WSL2 could not reach it, so tooling fell back to public mirrors and retries. Some builds succeeded slowly; some failed; everyone blamed the mirror anyway.
The on-call got paged for “registry outage” that wasn’t an outage.

The failure mode was clean in hindsight: Windows had the VPN routes; WSL2 traffic died at the host boundary.
The VPN client enforced a policy that didn’t allow forwarding from virtual adapters. Windows processes were fine; WSL2 was not.
Nobody checked the assumption early because “it works on my machine” was true—just not in the same network stack.

The fix wasn’t a clever route hack. It was a decision: for this company’s security posture, WSL2 would not get direct VPN access.
They shipped a sanctioned Linux VM image for builds and kept WSL2 for local-only workflows. It was less convenient.
It also stopped the weekly “why is apt broken” Slack archaeology.

Mini-story 2: The optimization that backfired

Another org tried to “speed up VPN” by enforcing split tunneling aggressively. Internet traffic stayed local; only internal prefixes went to VPN.
It reduced load on their concentrators and made video calls less miserable. Everybody celebrated.

Then the dev tooling started failing in weird ways. Containers in WSL2 would reach some internal services but not others.
DNS sometimes returned internal IPs, sometimes public ones. Git over HTTPS hung intermittently.
It wasn’t random. It was deterministic chaos.

The optimization introduced two coupling points: route completeness and DNS consistency.
Their split tunnel routes didn’t include every internal dependency, especially newly added ones. And their DNS strategy relied on Windows NRPT rules,
which WSL2 didn’t honor when it used a generic resolver path.
The result: resolution succeeded, but the route didn’t exist, so connections timed out. Developers “fixed” it with hosts files and hardcoded IPs.
Which worked until the next renumbering.

The rollback wasn’t total. The team kept split tunnel for most staff, but created a “developer” VPN profile:
either full tunnel or a much broader set of prefixes plus stable DNS. Less elegant, more bandwidth, fewer mystery outages.
They also banned hosts-file fixes for internal services, not out of purity, but because it was operational debt with teeth.

Mini-story 3: The boring but correct practice that saved the day

A platform team had a habit that looked like bureaucracy: every time a developer reported “VPN broke WSL,”
the first response wasn’t advice—it was a tiny checklist of command outputs to paste.
WSL: ip route, cat /etc/resolv.conf, dig internal domain.
Windows: route table snippet and DNS server list. Same request, every time.

People complained at first. “Why do you need all that? It’s obviously DNS.” It was not always DNS.
The checklist made them stop arguing with their own hunches.
After a month, patterns emerged: one VPN client version broke forwarding; one Wi‑Fi driver update changed metrics; one endpoint firewall policy update reclassified the WSL NIC.

When a real incident hit—an internal repo unreachable from WSL across a department—those baseline outputs made triage fast.
They could say: Windows resolves and connects, WSL resolves but cannot connect, traceroute dies at gateway, and the WSL NIC is Public.
That narrowed the blast radius from “network is down” to “endpoint policy regression”.

The eventual fix was dull: a policy adjustment to allow specific outbound flows from the WSL adapter into the VPN adapter,
plus a documented workaround for MTU clamping when on a certain ISP. Nobody wrote a blog post about it.
But the next time it happened, the on-call slept.

FAQ

1) Why does WSL2 break on VPN when WSL1 didn’t?

WSL1 shared the Windows network stack. WSL2 is a VM behind a NAT boundary. VPN clients and firewalls that handle the Windows stack
don’t automatically forward traffic for a second stack.

2) Should I switch to WSL1 just for VPN compatibility?

If your workload tolerates WSL1 (filesystem performance and kernel features can be limiting), it’s a legitimate fix.
It avoids the NAT boundary and often behaves like “normal Windows networking”.

3) Is the problem always DNS?

No. DNS is common, but plenty of failures are routing/policy: traffic reaches the Windows gateway and dies because the VPN client blocks forwarding
or the firewall profile is hostile.

4) Why does it work until I reconnect VPN or resume from sleep?

VPN reconnect and resume can reorder routes, change DNS servers, and change interface metrics.
WSL may keep old resolver configuration, and the Windows/VPN state changes underneath it.

5) How do I know if it’s MTU?

If name resolution works and small requests succeed, but HTTPS downloads, git clones, or API calls hang or stall, suspect MTU/PMTUD.
Clamp MTU in WSL temporarily and retest the failing workflow.

6) Can I just add routes inside WSL to fix split tunnel?

Usually no. WSL routes still go through the Windows gateway. If Windows/VPN won’t forward, Linux-side route tweaks won’t change policy.
Focus on Windows routes and VPN behavior first.

7) Why can Windows reach internal IPs but WSL can’t?

Windows processes originate from Windows interfaces and are handled by the VPN client as expected.
WSL traffic originates from the WSL vEthernet/NAT subnet; some VPN clients treat that as “forwarded traffic” and block it.

8) What’s the cleanest enterprise-friendly solution?

A sanctioned development environment inside the corporate network: managed VM, VDI, or remote dev host.
If local WSL2 must be used, negotiate a documented policy exception and implement narrow firewall rules rather than broad profile changes.

9) Does enabling systemd in WSL fix VPN issues?

Sometimes it helps with DNS because you get a real resolver service, but it doesn’t magically bypass VPN policy or firewall rules.
Use it when you can support it operationally, not as a superstition.

Practical next steps

Do three things, in this order:

  1. Classify the failure: DNS vs routing/policy vs MTU. Use the fast diagnosis steps and the tasks above.
  2. Pick a stable fix pattern: pin DNS, adjust firewall/profile, clamp MTU, adopt proxy, or switch strategy (WSL1/remote dev host).
  3. Make it repeatable: document the expected state (DNS servers, VPN mode, MTU) and keep a copy-paste checklist for the next outage.

The goal isn’t “make it work on your laptop today.” The goal is “make it boring next month.”
WSL2 is excellent. VPNs are necessary. Getting them to cooperate is less about clever commands and more about admitting where the boundary really is.
