WireGuard Windows Won’t Connect: 10 Fixes That Solve Most Cases

WireGuard on Windows is usually boring—in the best way. Then one day it just… doesn’t connect.
The tunnel shows “Active,” but nothing routes. Or you get a handshake once an hour like it’s sending postcards, not packets.
Or DNS dies and suddenly your “VPN issue” looks like the entire internet evaporated.

This is the practical, production-systems guide I wish more teams shipped with their configs:
ten fixes that solve the majority of real incidents, plus a fast diagnosis playbook, concrete commands, and the failure modes nobody admits to.

Fast diagnosis playbook (check 1–2–3)

When WireGuard on Windows “won’t connect,” you need to stop treating it as one problem.
It’s usually one of three bottlenecks: handshake, routing, or name resolution.
Find which one in under five minutes, then go deeper.

1) Is there a handshake?

  • On Windows: WireGuard UI → pick tunnel → look at “Latest handshake”.
  • On server: check wg show for “latest handshake” and RX/TX counters.

No handshake usually means a port/firewall/NAT/key problem.
A handshake with failing traffic usually means routes, AllowedIPs, NAT/forwarding, or MTU.

2) If handshake exists: can you ping an IP (not a name)?

  • Ping the server’s WireGuard interface IP (example: 10.6.0.1).
  • Ping an internal resource IP (example: 10.0.10.25).

If IP ping works but names don’t, it’s DNS. If neither ping works, it’s routing/AllowedIPs/firewall/MTU.

3) If you can ping IPs: does the route table look sane?

On Windows, wrong routes cause silent misery: traffic goes out your Wi‑Fi, not the tunnel, while the tunnel proudly claims it’s “Active.”
Check routes and interface metrics before you blame WireGuard.

Facts & context that explain the weirdness

  • WireGuard is UDP-only. If your network blocks outbound UDP or rate-limits it aggressively, you’ll see intermittent handshakes and “ghost” connectivity.
  • There’s no “session” to keep alive. WireGuard is stateless-ish: it sends encrypted packets; state is mostly key material and handshake timestamps. NAT devices, however, love state.
  • PersistentKeepalive exists mostly to appease NAT. It’s not a performance feature; it’s a survival tactic for clients behind stateful firewalls.
  • AllowedIPs is routing policy. It’s not an ACL in the traditional sense; it tells the peer what to encrypt and where to route.
  • Windows uses interface metrics to pick a route. A “split tunnel” can accidentally become a “no tunnel” if another interface wins the route decision.
  • WireGuard’s design was intentionally small. Fewer moving parts means fewer failure modes—unless your environment provides the missing complexity via NAT, DNS, and corporate security tooling.
  • Handshake timestamps can mislead you. A recent handshake doesn’t prove the return path works for the traffic you actually care about (think: asymmetric routing, blocked ICMP, or DNS-only failures).
  • MTU issues are older than most VPN vendors. Path MTU black holes predate WireGuard by decades; WireGuard just makes them visible when you push encapsulated packets through fragile networks.

One idea worth keeping taped to your monitor, paraphrased from Werner Vogels (reliability mindset):
everything fails, all the time; design and operate as if failure is normal.

The 10 fixes (most cases, no drama)

Fix 1: Prove the server is actually listening on UDP (and on the port you think)

Half of “WireGuard Windows won’t connect” reports boil down to “the server stopped listening” or it’s listening on the wrong interface or port.
Maybe someone changed the port. Maybe the service didn’t restart. Maybe a cloud security rule changed silently.

What to do: on the server, verify the listen port and that packets arrive.
If the server never sees incoming UDP, stop tweaking Windows. The problem isn’t there.

Fix 2: Confirm the Windows WireGuard tunnel is using the expected endpoint

Windows users copy configs, then later someone updates the server IP, or DNS changes, or the endpoint resolves differently on a corporate network.
If your endpoint is a hostname, you’ve invited DNS into your connectivity story. Sometimes that’s fine. Sometimes it’s a midnight pager.
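One sanity check that stays honest about the hostname dependency: split the Endpoint into host and port, then compare the port against the server’s actual listener. A sketch (the config text and hostname here are hypothetical stand-ins; in production read your real .conf file):

```shell
# Parse the Endpoint out of a client config; the port must match the server listener.
conf='[Peer]
Endpoint = vpn.example.com:51820'
endpoint=$(printf '%s\n' "$conf" | awk -F' *= *' '/^Endpoint/ {print $2}')
host=${endpoint%:*}    # resolve this from every network you care about
port=${endpoint##*:}   # compare against `ss -lunp` output on the server
echo "host=$host port=$port"
```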

Fix 3: Stop guessing—validate keys and peer mapping on the server

WireGuard has no username/password login. It’s keys all the way down.
A single wrong character in a public key, or reusing a client config across machines, can cause a “connects sometimes” mess—especially when two clients share the same key and fight for the same peer slot.
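A hedged sketch of the check: confirm the client’s public key maps to exactly one server peer. The key strings below are placeholder samples; in production feed in `sudo wg show wg0 peers`:

```shell
# Count how many server peers carry this client's public key.
# `peers` is a placeholder sample; replace with: peers=$(sudo wg show wg0 peers)
peers='9Zclientpub
7Xotherpub'
client_pub='9Zclientpub'
count=$(printf '%s\n' "$peers" | grep -cxF "$client_pub")
echo "matches: $count"   # 1 = mapped; 0 = key missing or mistyped
```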

Fix 4: Fix AllowedIPs on the client (the most common self-inflicted wound)

AllowedIPs decides what traffic goes into the tunnel. If your AllowedIPs doesn’t include the thing you’re trying to reach, WireGuard will behave perfectly while you experience failure perfectly.

  • Full tunnel: AllowedIPs = 0.0.0.0/0, ::/0
  • Split tunnel: include only internal prefixes, plus the server’s WG IP if needed.

If your goal is “reach internal subnets but keep internet local,” do split tunnel deliberately.
Don’t half-do full tunnel and wonder why Teams calls get weird.
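A split-tunnel client sketch using this article’s example addressing (keys are placeholders; PersistentKeepalive is optional and only earns its keep behind NAT):

```ini
[Interface]
PrivateKey = <this device's unique private key — never cloned>
Address = 10.6.0.2/32
DNS = 10.0.10.53

[Peer]
PublicKey = <server public key>
Endpoint = 203.0.113.10:51820
# Split tunnel: internal prefixes only — swap in 0.0.0.0/0, ::/0 for full tunnel.
AllowedIPs = 10.6.0.0/24, 10.0.10.0/24
PersistentKeepalive = 25
```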

Fix 5: Fix routing conflicts and interface metrics on Windows

Windows route selection is deterministic, not helpful. It picks the “best” route based on prefix length, metric, and interface cost.
If you have multiple VPNs, virtual adapters, or security software, you can end up with a route table that looks like a corporate org chart: complicated and occasionally imaginary.

Your job: ensure the WireGuard interface has routes for the destinations you need, and that no other interface steals them.

Fix 6: Fix DNS (because “VPN down” is often “DNS down”)

People blame WireGuard when ping 10.0.0.10 works but ping internal-app doesn’t.
That’s DNS. Fix DNS first; it’s cheaper than existential dread.

If you push internal DNS servers via the WireGuard config, make sure:
(1) those DNS servers are reachable through the tunnel, and
(2) Windows is actually using them on that interface.

Fix 7: Fix NAT and forwarding on the server (handshake without traffic)

A handshake proves the client and server can exchange encrypted control traffic.
It does not prove the server can forward packets from the tunnel to your LAN or the internet.

For full tunnel, the server usually needs IP forwarding and NAT (masquerade) on the egress interface.
For split tunnel into an internal LAN, you might need routes on your LAN gateway back to the WireGuard subnet.

Fix 8: Fix MTU (the “it connects but everything times out” classic)

MTU problems show up as: handshake present, small pings work, websites hang, RDP/SMB stalls, or only some apps work.
UDP encapsulation changes packet sizes. Some networks drop fragments or block ICMP “fragmentation needed.”
You get a path MTU black hole. Good times.
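The common default of 1420 isn’t magic; it’s arithmetic. Every tunneled packet carries an outer IP header, a UDP header, and WireGuard’s data-packet framing, and all of it must fit inside the path MTU:

```shell
# MTU arithmetic: outer headers must fit inside the underlying path MTU.
path_mtu=1500        # typical Ethernet path; PPPoE, LTE, or nested tunnels can be lower
overhead=80          # worst case: 40 (outer IPv6) + 8 (UDP) + 32 (WireGuard data framing)
wg_mtu=$((path_mtu - overhead))
echo "tunnel MTU: $wg_mtu"   # 1420
```

With an IPv4-only outer path the overhead is 60 bytes, but 1420 covers both cases, which is why it’s the usual default.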

Fix 9: Fix Windows firewall and security tooling that “inspects” your VPN

Windows Defender Firewall can block the adapter, and corporate endpoint tools can block or “optimize” UDP.
If you’re on a managed laptop, assume you’re not the only one with opinions about network traffic.

Fix 10: Fix time skew and broken networking primitives

WireGuard isn’t TLS, but time still matters in the surrounding ecosystem: DNS, certificate validation for apps inside the tunnel, and log correlation.
Also, if your Windows network stack is in a weird state (stale routes, broken winsock catalog, half-installed virtual adapters), WireGuard will be blamed for crimes it didn’t commit.

Joke #1: A VPN “connected” indicator is like a dishwasher light—it tells you the machine has feelings, not that the plates are clean.

Practical tasks with commands (and how to decide)

These are tasks I actually run during incidents. Each includes a command, what the output means, and the decision you make.
Commands are shown with a Linux server prompt because most WireGuard servers are Linux; for Windows client validation we use the GUI plus server-side truth.

Task 1: On the server, confirm WireGuard is up and listening

cr0x@server:~$ sudo wg show
interface: wg0
  public key: 2y...serverpub...
  private key: (hidden)
  listening port: 51820

peer: 9Z...clientpub...
  endpoint: 203.0.113.55:61644
  allowed ips: 10.6.0.2/32
  latest handshake: 1 minute, 12 seconds ago
  transfer: 18.42 MiB received, 31.10 MiB sent

Meaning: If you see a listening port and a recent handshake, WireGuard is alive.
If the “listening port” is missing, the interface may be down or misconfigured.

Decision: No handshake and no endpoint seen? Focus on UDP reachability (firewall/NAT/port). Handshake present but transfer stuck? Move to routing/NAT/MTU.

Task 2: Verify the service and interface state

cr0x@server:~$ sudo systemctl status wg-quick@wg0
● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
     Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled)
     Active: active (exited) since Sat 2025-12-27 09:11:04 UTC; 2h 13min ago
       Docs: man:wg-quick(8)
             man:wg(8)
    Process: 1234 ExecStart=/usr/bin/wg-quick up wg0 (code=exited, status=0/SUCCESS)

Meaning: Active (exited) is normal for wg-quick; it configures and exits.

Decision: If it’s failed, fix server config before touching Windows. If it’s active, continue.

Task 3: Check the UDP socket is bound (catches port mismatches)

cr0x@server:~$ sudo ss -lunp | grep 51820
UNCONN 0      0           0.0.0.0:51820      0.0.0.0:*

Meaning: The system is listening on UDP 51820 on all IPv4 addresses. With kernel-mode WireGuard the process column is empty—the socket belongs to the kernel, not a userspace daemon—so don’t panic when no process name appears.

Decision: If there’s no listener, fix wg0 or change the client config to match the actual port.

Task 4: Verify packets arrive at the server (prove the network path)

cr0x@server:~$ sudo tcpdump -ni any udp port 51820 -c 5
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
09:36:11.112233 eth0  IP 198.51.100.44.61644 > 203.0.113.10.51820: UDP, length 148
09:36:16.113901 eth0  IP 198.51.100.44.61644 > 203.0.113.10.51820: UDP, length 148

Meaning: The client is reaching your server’s UDP port.

Decision: If you see nothing while the client toggles the tunnel, fix perimeter firewall rules, cloud security groups, ISP blocks, or endpoint address.

Task 5: Confirm server-side forwarding is enabled

cr0x@server:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Meaning: IPv4 forwarding is enabled; required for routing from wg0 to another interface.

Decision: If it’s 0 and you expect the server to route traffic, enable it and persist it (then retest traffic).
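Persisting it is one small file (the filename is a common convention, not a requirement); apply with `sudo sysctl --system` and re-check:

```ini
# /etc/sysctl.d/99-wireguard-forward.conf — sketch; filename is conventional, not required
net.ipv4.ip_forward = 1
```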

Task 6: Check nftables/iptables NAT rules for full-tunnel setups

cr0x@server:~$ sudo iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.6.0.0/24 -o eth0 -j MASQUERADE

Meaning: Clients from 10.6.0.0/24 are NATed out eth0, so they can reach the internet (or upstream) without needing explicit return routes.

Decision: If this rule is missing and you want full-tunnel internet egress, add it. If you want split-tunnel to a LAN, you may prefer proper routing over NAT.

Task 7: Check that server has a route back to the client subnet and the LAN has a route back (split tunnel)

cr0x@server:~$ ip route
default via 203.0.113.1 dev eth0
10.0.10.0/24 dev br0 proto kernel scope link src 10.0.10.1
10.6.0.0/24 dev wg0 proto kernel scope link src 10.6.0.1

Meaning: The server knows the WireGuard subnet is on wg0.

Decision: If your target is 10.0.10.0/24 and clients can’t reach it, ensure the LAN side knows how to return traffic to 10.6.0.0/24 (via this server). Without that, you’ll see one-way flows and sadness.

Task 8: Confirm the peer’s AllowedIPs on the server (wrong subnets, wrong /32)

cr0x@server:~$ sudo wg show wg0 allowed-ips
9Z...clientpub... 10.6.0.2/32

Meaning: The server will route 10.6.0.2 to this peer. That’s typical for client addressing.

Decision: If you accidentally assigned the same client IP to two peers, or gave a /24 to one peer, you’ll cause misrouting. Fix addressing and AllowedIPs.

Task 9: Check for duplicate keys (two devices sharing a config)

cr0x@server:~$ sudo wg show wg0 | sed -n '1,120p'
interface: wg0
  public key: 2y...serverpub...
  listening port: 51820

peer: 9Z...clientpub...
  endpoint: 198.51.100.44:61644
  latest handshake: 24 seconds ago
  transfer: 2.10 MiB received, 3.90 MiB sent

peer: 9Z...clientpub...
  endpoint: 203.0.113.200:54321
  latest handshake: 3 seconds ago
  transfer: 2.11 MiB received, 3.91 MiB sent

Meaning: Same public key appearing twice is a red flag. In real life you’ll usually see one peer entry, but the endpoint will “flip” between two IPs as devices fight.

Decision: Issue unique keypairs per device. Never clone configs with private keys across laptops.

Task 10: Test reachability from the server into the tunnel (server to client)

cr0x@server:~$ ping -c 3 10.6.0.2
PING 10.6.0.2 (10.6.0.2) 56(84) bytes of data.
64 bytes from 10.6.0.2: icmp_seq=1 ttl=128 time=32.1 ms
64 bytes from 10.6.0.2: icmp_seq=2 ttl=128 time=31.7 ms
64 bytes from 10.6.0.2: icmp_seq=3 ttl=128 time=33.0 ms

--- 10.6.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms

Meaning: The server can reach the client’s tunnel IP. That’s a good sign for routing and firewall on the client side.

Decision: If this fails but handshake exists, suspect Windows firewall on the tunnel adapter, or that ICMP is blocked. Test TCP/UDP flows too.

Task 11: Identify MTU black holes with a “do not fragment” probe (server-side)

cr0x@server:~$ ping -c 3 -M do -s 1472 10.6.0.2
PING 10.6.0.2 (10.6.0.2) 1472(1500) bytes of data.
From 10.6.0.1 icmp_seq=1 Frag needed and DF set (mtu = 1420)
From 10.6.0.1 icmp_seq=2 Frag needed and DF set (mtu = 1420)
From 10.6.0.1 icmp_seq=3 Frag needed and DF set (mtu = 1420)

--- 10.6.0.2 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2046ms

Meaning: The packet exceeds the path MTU; the local error reports the limiting hop’s MTU (1420, the wg0 interface here). If smaller probes vanish silently with no error, the black hole is deeper in the path.

Decision: Set the WireGuard interface MTU lower (commonly 1280–1420 depending on path). Retest until the “do not fragment” probe succeeds.

Task 12: Check the server firewall filter rules for wg0 traffic

cr0x@server:~$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p udp --dport 51820 -j ACCEPT
-A FORWARD -i wg0 -o eth0 -j ACCEPT
-A FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Meaning: UDP 51820 is allowed in, and forwarding between wg0 and eth0 is permitted.

Decision: If forwarding rules are missing, handshake may work but forwarded traffic will be dropped. Fix FORWARD rules (or nftables equivalent) and retest.
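For teams on nftables, a hedged equivalent of the FORWARD rules above might look like this (the table name `wgfw` and interfaces wg0/eth0 are assumptions; it mirrors the drop-by-default policy in the iptables listing, so merge carefully with any existing ruleset before loading with `nft -f`):

```
table inet wgfw {
  chain forward {
    type filter hook forward priority 0; policy drop;
    iifname "wg0" oifname "eth0" accept
    iifname "eth0" oifname "wg0" ct state established,related accept
  }
}
```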

Task 13: Confirm DNS server reachability through the tunnel (server-side example)

cr0x@server:~$ dig @10.0.10.53 internal-app.example A +time=2 +tries=1
; <<>> DiG 9.18.24 <<>> @10.0.10.53 internal-app.example A +time=2 +tries=1
;; ANSWER SECTION:
internal-app.example. 60 IN A 10.0.10.25

;; Query time: 34 msec

Meaning: The internal DNS server responds quickly from the server’s network perspective.

Decision: If DNS is reachable from the server but not from the client, focus on client routing/DNS binding. If it’s not reachable from the server either, fix LAN routing/firewall first.

Task 14: Validate the server’s peer config file hasn’t drifted

cr0x@server:~$ sudo grep -nE '^\[Interface\]|\[Peer\]|ListenPort|Address|AllowedIPs|PublicKey|PostUp|PostDown' /etc/wireguard/wg0.conf
1:[Interface]
2:Address = 10.6.0.1/24
3:ListenPort = 51820
4:PostUp = iptables -t nat -A POSTROUTING -s 10.6.0.0/24 -o eth0 -j MASQUERADE
5:PostDown = iptables -t nat -D POSTROUTING -s 10.6.0.0/24 -o eth0 -j MASQUERADE
7:[Peer]
8:PublicKey = 9Z...clientpub...
9:AllowedIPs = 10.6.0.2/32

Meaning: You’re checking the exact keys and routing directives that matter for “connectivity versus no connectivity.”

Decision: If config has old PostUp rules, wrong interface name, or conflicting AllowedIPs, fix it, restart wg-quick, and retest.

Joke #2: MTU bugs are the kind of problem that makes you miss the days when your network only failed loudly.

Common mistakes: symptom → root cause → fix

1) “Tunnel is active” but nothing works

Symptom: WireGuard UI shows Active; no websites, no internal apps, no pings.

Root cause: AllowedIPs doesn’t include the destinations; or the Windows route table sends traffic elsewhere.

Fix: Decide split vs full tunnel, then set AllowedIPs accordingly. Verify routes exist and aren’t being overridden by another adapter’s metric.

2) Handshake never appears (Latest handshake: never)

Symptom: Client toggles on/off; server shows no endpoint and no handshake.

Root cause: UDP blocked, wrong endpoint/port, server not listening, cloud firewall rule missing, or NAT not forwarding.

Fix: Server-side ss -lunp and tcpdump. If packets don’t arrive, fix perimeter. If they arrive but no handshake, re-check keys and peer mapping.

3) Handshake exists, but can’t reach LAN resources

Symptom: Latest handshake updates, but internal subnets are dead.

Root cause: Missing server forwarding/NAT rules, or LAN doesn’t route back to the WireGuard subnet.

Fix: Enable IP forwarding; add NAT for full tunnel, or add return routes on the LAN gateway for split tunnel.

4) IP connectivity works; DNS fails

Symptom: ping 10.0.10.25 works but names don’t resolve; browsers spin.

Root cause: Wrong DNS servers in config, Windows not using the tunnel DNS, or DNS servers aren’t reachable via the tunnel.

Fix: Ensure DNS server IP is inside AllowedIPs and reachable; confirm Windows binds DNS to the WireGuard interface; avoid pushing public DNS for internal-only zones.

5) Works on home Wi‑Fi, fails on corporate network / hotel

Symptom: Same laptop, same config; different network causes “never handshake” or flapping.

Root cause: Outbound UDP restricted, captive portal, symmetric NAT weirdness, or aggressive UDP timeout.

Fix: Set PersistentKeepalive = 25 on the client for NATed networks; consider moving the server to a more reachable port; confirm the network allows outbound UDP at all.

6) Some sites load; others hang; RDP/SMB stalls

Symptom: Partial success; timeouts; large downloads freeze.

Root cause: MTU/fragmentation black hole.

Fix: Lower MTU on the tunnel; test with DF pings and real app traffic.

7) “It used to work yesterday” after a Windows update

Symptom: Post-update, the adapter is missing or traffic is blocked.

Root cause: Driver issue, changed firewall profile, new network category, or security software reasserted policies.

Fix: Reinstall WireGuard, re-check firewall rules for the WireGuard interface, confirm adapter exists and is enabled.

8) Two users can’t connect at the same time

Symptom: One connects, the other drops; endpoints flip on the server.

Root cause: Shared client private key (cloned config).

Fix: Unique keys per device; unique tunnel IP per peer; audit configs for reuse.

Checklists / step-by-step plan

Step-by-step: from “won’t connect” to root cause

  1. Define the failure: “No handshake” vs “handshake but no traffic” vs “traffic but DNS broken.” Don’t skip this.
  2. Server truth first: run wg show. Confirm listening port and whether the server sees an endpoint.
  3. Network path: tcpdump on UDP port while toggling the Windows tunnel.
  4. Keys and peer mapping: verify server peer public key matches the Windows client public key; confirm each peer’s AllowedIPs are correct and non-overlapping.
  5. Routing policy: decide split or full tunnel. Then set AllowedIPs and server NAT/routes consistently with that choice.
  6. Forwarding/NAT: enable IP forwarding; add NAT masquerade for full tunnel; or configure LAN return routes for split tunnel.
  7. DNS: confirm DNS server is reachable through the tunnel and that Windows is using it for queries destined to internal zones.
  8. MTU: if things “almost work,” test DF ping sizes and lower MTU.
  9. Security tooling: if everything is correct but it still fails on managed endpoints, suspect firewall/EDR policies; get the policy owner involved early.
  10. Stabilize: add keepalive only if NAT requires it; document ports, subnets, and ownership.
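The playbook above compresses into a tiny decision function—a sketch, not a tool; the three inputs are the yes/no answers from steps 1–3 of the fast diagnosis playbook:

```shell
# Map the three playbook observations to the layer you should debug next.
triage() {
  # $1: handshake present? $2: IP ping works? $3: DNS resolves? (each yes|no)
  if [ "$1" = no ]; then echo "udp path: port, firewall, NAT, keys"
  elif [ "$2" = no ]; then echo "routing: AllowedIPs, forwarding, NAT, MTU"
  elif [ "$3" = no ]; then echo "dns: resolver reachability and interface binding"
  else echo "stack looks healthy: suspect MTU edge cases or app policy"
  fi
}
triage yes yes no   # → dns: resolver reachability and interface binding
```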

Configuration sanity checklist (client + server)

  • Endpoint IP/hostname is correct; port matches server listener.
  • Client has unique private key; server has the matching peer public key.
  • Client AllowedIPs includes the destinations you expect to route through the tunnel.
  • Server peer AllowedIPs includes the client tunnel IP (/32) and doesn’t overlap other peers.
  • Server has net.ipv4.ip_forward=1 if routing beyond wg0 is required.
  • NAT/forwarding rules exist if doing full tunnel egress.
  • DNS servers pushed to client are reachable through the tunnel.
  • MTU is set appropriately for the path; avoid “default forever” thinking.

Three corporate mini-stories from the trenches

Mini-story 1: The incident caused by a wrong assumption (DNS is “always reachable”)

A mid-sized company rolled out WireGuard to let engineers reach internal dashboards. Split tunnel, simple subnets, everything tidy.
The config pushed two internal DNS servers, because the internal apps lived under internal hostnames. Reasonable.

On Monday morning, tickets exploded: “VPN connects but nothing works.” The helpdesk saw “Active” and told people to reboot.
Rebooting did nothing except waste everyone’s coffee window.

The wrong assumption: that the DNS servers were reachable from the tunnel. They weren’t.
A network change the previous week moved DNS to a different VLAN and tightened ACLs. The WireGuard subnet was not included.
Clients could reach internal IPs if they knew them, but name resolution failed, and most apps didn’t even try IP fallbacks.

The fix was boring and correct: either allow the WireGuard subnet to query DNS (preferred), or push a DNS server that is reachable from the WireGuard subnet.
After the ACL update, the “VPN outage” vanished instantly—no WireGuard changes required.

The lesson: treat DNS as a dependency you must route and allow explicitly. In corporate networks, DNS is rarely “just there”; it’s guarded like a museum exhibit.

Mini-story 2: The optimization that backfired (clever NAT shortcuts)

Another team wanted full-tunnel VPN so laptops would egress through a central security stack. They built a WireGuard gateway VM.
To “optimize,” they used a minimal NAT rule and removed what they thought was redundant forwarding logic.
They also tuned firewall defaults to be restrictive, because security reviews love restrictive defaults.

It worked in a basic ping test. They declared victory and shipped it.
Then application traffic started failing in strange ways: some HTTPS sites loaded, others timed out; large downloads stalled; software updates failed unpredictably.
It looked like random internet flakiness, which is the most expensive kind of flakiness.

The backfire was twofold. First: they didn’t validate MTU end-to-end. Encapsulation plus a cloud path with odd MTU behavior created a fragmentation black hole.
Second: their “minimal” firewall rule set allowed established return traffic only on one interface direction; some flows got dropped depending on which path the kernel chose.

The remediation was unglamorous: explicit FORWARD rules for wg0 ↔ egress, proper conntrack handling, and an MTU lowered to a tested safe value.
Performance didn’t get worse. Reliability improved dramatically, which is a performance metric adults care about.

The lesson: “optimizing” network policy by removing lines you don’t understand is like “optimizing” an airplane by removing bolts you can’t name.

Mini-story 3: The boring practice that saved the day (unique keys and peer inventory)

A global org had a fleet of contractor laptops. Contractors came and went; devices were reimaged frequently.
The VPN team instituted a tedious rule: every device gets unique WireGuard keys, unique tunnel IP, and an inventory record.
No shared configs. No “just copy Alice’s file.”

People complained. It was extra work. It slowed “quick access” requests.
But it made the system observable: when a peer appeared on the server, you knew which device it was. When it misbehaved, you could disable it safely.

One Friday, connectivity began flapping for a subset of users. The server logs weren’t very chatty (WireGuard is famously minimalist),
but wg show showed endpoints “roaming” rapidly for a particular public key.
That pattern screamed “duplicate key in the wild.”

Because they had inventory, they identified the device, contacted the owner, and rotated keys without touching anyone else.
No broad outage, no mass reconfig, no guessing which peer was which.
The boring practice paid for itself in a single afternoon.

The lesson: uniqueness and inventory are reliability features. They don’t feel like features until the day you need them.

FAQ

1) Why does WireGuard show “Active” on Windows but I have no internet?

“Active” only means the interface is up and the config is loaded. It doesn’t guarantee routing is correct.
Check handshake, then check AllowedIPs and whether Windows routes 0.0.0.0/0 (full tunnel) through WireGuard or not.

2) I have a handshake, but I can’t reach internal subnets. What’s the fastest explanation?

The server isn’t forwarding traffic from wg0 to the LAN, or the LAN doesn’t have a route back to the WireGuard subnet.
Handshake proves “we can talk.” It doesn’t prove “we can route.”

3) Do I need PersistentKeepalive on Windows?

Only if the Windows client is behind NAT/firewalls that drop idle UDP mappings. If you see handshakes only after you send traffic,
or connectivity dies after a few minutes idle, set PersistentKeepalive = 25 on the client peer section.

4) What AllowedIPs should I use for split tunnel?

Include the internal networks you want to reach (example: 10.0.0.0/8 or specific /24s), plus any other private ranges you truly need.
Don’t throw in 0.0.0.0/0 “just in case.” That’s not split tunnel; that’s you outsourcing your routing decisions to future-you.

5) Why does it work on my phone hotspot but not on the office Wi‑Fi?

Office networks often restrict outbound UDP, enforce captive portals, or do stateful inspection that times out UDP quickly.
Prove it with server-side tcpdump: if packets never arrive from the office network, it’s not a Windows config problem.

6) Can two Windows laptops use the same WireGuard config?

No. Not safely. If they share the same private key, the server will treat them as the same peer and the endpoint will “roam” between them.
You’ll get flapping, intermittent traffic, and blame that travels faster than UDP.

7) What MTU should I set for WireGuard on Windows?

There’s no universal value. Common safe starting points are 1420 or 1380; for hostile paths, 1280 is the “just make it work” value.
If you have handshake but application stalls, test PMTU and lower MTU until the stalling stops.

8) Should I use a hostname or IP for the Endpoint?

IP is more deterministic. Hostnames are fine if DNS is reliable and controlled, but it adds a dependency.
If you must use hostnames, make sure the Windows client can resolve it on every network you care about (including captive portals and locked-down DNS).

9) Why can I ping the server’s WireGuard IP but not reach anything behind it?

Because reaching the server’s wg0 address is just a direct tunnel hop. Reaching anything behind it requires forwarding and often NAT or return routes.
Fix server forwarding, firewall, and upstream routing.

10) Is WireGuard “less compatible” than other VPNs on corporate networks?

It’s often blocked simply because it’s UDP and easy to identify by behavior (not necessarily by payload).
Many corporate environments explicitly allow TLS/443 but treat arbitrary UDP as suspicious. That’s policy, not a WireGuard bug.

Next steps that keep it stable

If you want WireGuard on Windows to be boring again, operate it like any production dependency:
measure reality (handshake, routes, DNS), enforce uniqueness (keys, IPs), and document the intended routing model (split vs full).
Most “won’t connect” tickets vanish once you stop treating the client UI as truth and start treating the server as the source of record.

  1. Pick a standard: split tunnel or full tunnel, and make AllowedIPs match it.
  2. On the server, keep a verified baseline: wg show, listener check, forwarding, firewall/NAT rules.
  3. Write down ownership: who controls DNS, who controls perimeter UDP rules, who controls endpoint security policies.
  4. When it fails: follow the Fast diagnosis playbook—handshake → IP reachability → DNS → MTU.

The goal isn’t “VPN connects.” The goal is “the packets you care about reach the services you pay for,” consistently, on the networks your users actually sit on.
