WireGuard AllowedIPs Confusion: Why Traffic Doesn’t Go Where You Expect (and How to Fix It)

You bring up a WireGuard tunnel. Handshake looks healthy. Bytes increment a little. And yet the traffic you wanted over the VPN
is still going out the wrong interface, or worse: disappearing into a silent void where pings go to retire.

Nine times out of ten, the root cause isn’t cryptography, MTU, or “the internet is down.” It’s a misunderstanding of
AllowedIPs: what it means, who uses it, and how it turns into actual routes on a real operating system.

The mental model: AllowedIPs is routing policy, not an ACL

If you take away one thing: AllowedIPs is a routing decision input.
It is not a firewall rule. It is not “who is allowed to connect.”
It’s also not “what IPs exist on the other side.”

In WireGuard, every peer has a list of AllowedIPs. That list does two jobs:

  1. Outbound peer selection. When the local host wants to send a packet to some destination IP,
    WireGuard picks the peer whose AllowedIPs contains that destination. That’s effectively
    “which peer should carry this packet.”
  2. Inbound source validation. When a packet arrives from a peer, WireGuard checks whether the
    packet’s source IP is inside that peer’s AllowedIPs. If not, WireGuard drops it.

That’s it. It doesn’t magically program your whole network unless something else (usually wg-quick)
translates those prefixes into kernel routes. And even then, it’s still just routes.
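The two jobs can be sketched in a few lines. This is a toy model of cryptokey routing, not WireGuard's actual implementation; the peer names and prefixes are made up:

```python
import ipaddress

# Toy cryptokey routing table: peer -> AllowedIPs (hypothetical example data)
ALLOWED_IPS = {
    "peer-hub": ["10.99.0.0/16"],
    "peer-site-b": ["10.42.0.0/16", "10.42.255.254/32"],
}

def select_peer(dst: str):
    """Outbound: pick the peer whose AllowedIPs best (longest-prefix) matches dst."""
    dst_ip = ipaddress.ip_address(dst)
    best = None
    for peer, prefixes in ALLOWED_IPS.items():
        for p in prefixes:
            net = ipaddress.ip_network(p)
            if net.version == dst_ip.version and dst_ip in net:
                if best is None or net.prefixlen > best[1]:
                    best = (peer, net.prefixlen)
    return best[0] if best else None  # None -> no peer claims it; wg can't carry it

def accept_inbound(peer: str, src: str) -> bool:
    """Inbound: drop unless the packet's source IP is inside that peer's AllowedIPs."""
    src_ip = ipaddress.ip_address(src)
    return any(src_ip in ipaddress.ip_network(p) for p in ALLOWED_IPS[peer])

print(select_peer("10.42.7.1"))                   # -> peer-site-b (longest match wins)
print(accept_inbound("peer-hub", "192.168.1.5"))  # -> False (source not allowed, dropped)
```

Both jobs read the same table; that symmetry is why a missing prefix breaks traffic in one direction and silently drops it in the other.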

Here’s the failure mode that keeps SREs employed: people assume AllowedIPs is “remote subnets.”
Sometimes it is. Sometimes it’s “remote subnets plus default route because I want full tunnel.”
And sometimes it’s “the IP of the peer itself.” But none of those are universally correct. They’re intents,
and intents need matching routing, forwarding, NAT, and DNS behaviors.

Another key detail: WireGuard does not speak “routes” directly. It speaks “peers and allowed prefixes.”
The OS kernel speaks routes. wg-quick is the translator and occasional chaos monkey.

Joke #1: AllowedIPs is called “Allowed” because “PleaseStopSendingMyPacketsIntoTheWrongTunnelIPs”
didn’t fit in a config file.

Interesting facts and historical context

A bit of context makes the weirdness feel less personal.

  • WireGuard entered the Linux kernel in 2020 (Linux 5.6). Before that, it lived out-of-tree, which shaped its minimalist interfaces.
  • WireGuard intentionally avoids the “mode switches” of many traditional VPNs (e.g., IPsec’s transport vs tunnel modes). Instead, it’s always L3: you route IP packets.
  • wg (the tool) does not add routes. That was a deliberate separation: the VPN driver knows peers; the OS knows routing.
  • wg-quick is a convenience wrapper that uses standard tools (ip, resolvconf/systemd-resolved, iptables/nft) to create a “VPN experience.”
  • AllowedIPs resembles a compressed routing table: it’s essentially the per-peer equivalent of “these prefixes belong over here,” plus an anti-spoofing check.
  • WireGuard uses a “cryptokey routing” concept: instead of selecting a tunnel by interface, it selects a peer by public key and destination prefix match.
  • Linux supports multiple routing tables and rules (policy routing). wg-quick uses them for full-tunnel setups to avoid clobbering the main table.
  • Overlapping prefixes are legal in WireGuard config, but the selection rules (longest prefix match) can surprise you in multi-peer deployments.
  • IPv6 often fails first because people set IPv4 AllowedIPs carefully and forget IPv6 defaults or DNS AAAA answers. The tunnel works; the browser doesn’t.

How packets actually move: kernel routes, WireGuard, and wg-quick

The three routing layers you’re dealing with (whether you like it or not)

When you run WireGuard on Linux, there are three relevant decision points:

  1. Linux routing: picks an egress interface and next hop for a destination IP using route tables and policy rules.
  2. WireGuard peer selection: inside the WireGuard interface, picks which peer should carry the packet based on destination matching against AllowedIPs.
  3. Remote side routing/forwarding: the far end must know what to do with the packet next (deliver locally, forward, NAT, etc.).

If any of those three layers disagree, you get the classic “handshake but no traffic” experience.
Handshake success only proves that UDP can reach the peer and keys are valid. It says nothing about
the return path, NAT, or forwarding.

What wg-quick really does (and why you should read its output)

wg-quick up wg0 is convenient. It’s also opinionated:

  • It creates the interface (ip link add wg0 type wireguard).
  • It assigns interface addresses from Address =.
  • It configures WireGuard peers and their AllowedIPs.
  • It adds routes for each AllowedIPs prefix (unless policy routing is used).
  • For full-tunnel (0.0.0.0/0 or ::/0), it often adds policy routing rules and a separate table.
  • It may set up DNS (depending on DNS = and your resolver stack).
  • It may add firewall rules if you use PostUp/PostDown or distro integrations.

The critical part: those routes are not “WireGuard routes.” They’re kernel routes.
If you later tweak AllowedIPs and don’t rerun wg-quick (or manually adjust routes),
the kernel routing table may not match your intent anymore. That mismatch is subtle and common.
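As a concrete illustration, here is a minimal split-tunnel client config of the kind wg-quick translates into the routes above. All keys, addresses, and the endpoint are placeholders, not values from a real deployment:

```ini
# /etc/wireguard/wg0.conf (client side, hypothetical values)
[Interface]
Address = 10.40.0.2/32
PrivateKey = <client-private-key>
# MTU = 1380   # set explicitly if the path includes PPPoE/LTE/nested tunnels

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Split tunnel: wg-quick adds a kernel route into wg0 for each of these prefixes
AllowedIPs = 10.40.0.1/32, 10.99.0.0/16
PersistentKeepalive = 25
```

Every prefix in AllowedIPs here becomes both a peer-selection entry and, via wg-quick, a kernel route. Change one without the other and the mismatch begins.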

The “default route through WireGuard” pattern

Setting AllowedIPs = 0.0.0.0/0, ::/0 for a peer says:
for any destination IP, choose this peer. But it doesn’t necessarily mean the kernel will send
every packet to the wg0 interface. That depends on routing rules.

On Linux, wg-quick usually implements full-tunnel like this:

  • Add a default route in a separate routing table (often table 51820).
  • Add ip rule entries so that traffic originating from the WireGuard interface IP
    (or marked packets) uses that table.
  • Ensure the peer’s endpoint traffic still goes out via the real uplink, not through the tunnel:
    wg-quick marks its own encrypted UDP packets so they skip the VPN table; manual setups often add
    an explicit host route for the endpoint instead (either way, you avoid tunneling the tunnel).
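For reference, a manual sketch of that full-tunnel plumbing. These are not wg-quick’s literal commands, they require root, and the table number, endpoint, and gateway are placeholders:

```shell
# Hand-rolled full-tunnel policy routing (root-only sketch; values are placeholders)
ip route add default dev wg0 table 51820         # default route in a dedicated table
ip rule add not fwmark 51820 table 51820         # unmarked traffic consults that table
ip rule add table main suppress_prefixlength 0   # more-specific main-table routes still win
# Manual alternative to the fwmark bypass: pin the endpoint via the real uplink
ip route add 203.0.113.50 via 192.0.2.1 dev eth0
```

The point of the sketch: the tunnel’s own UDP must escape the “everything goes to wg0” policy, by mark or by explicit route.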

That exception is why “my VPN came up and immediately killed itself” sometimes happens: if the endpoint exception
is wrong (because of DNS changes, dual-stack confusion, or you’re behind CGNAT and the endpoint is variable),
your UDP packets to the endpoint get routed into the tunnel. The tunnel then can’t reach the endpoint.
It’s self-sabotage, with perfect logic.

Fast diagnosis playbook

You want speed. You also want to avoid the classic trap of changing five things at once and then
not knowing which one fixed it.

First: establish whether packets are entering and leaving the WireGuard interface

  1. Check handshake + transfer counters.
  2. Check kernel routing decision for the destination with ip route get (IPv4 and IPv6).
  3. Capture on wg0 and on the physical interface to see where packets actually go.

Second: confirm the peer selection logic matches your AllowedIPs intent

  1. List peers and their AllowedIPs.
  2. Look for overlaps (e.g., two peers both claim 10.0.0.0/8).
  3. Validate that inbound packets’ source addresses are inside the sender’s AllowedIPs (anti-spoofing drop).

Third: validate forwarding/NAT and the return path on the far end

  1. Is IP forwarding enabled on the “server” or gateway peer?
  2. Does the remote network know a route back to the VPN client subnet?
  3. If you rely on NAT, is it actually applied to the correct interface and source range?

Fourth: handle the boring killers: MTU and DNS

  1. Test PMTU with ping -M do (IPv4) and realistic packet sizes.
  2. Confirm DNS servers and search domains are what you think they are after tunnel up.
  3. Confirm IPv6 isn’t leaking or blackholing (especially with full-tunnel IPv4 only).

Practical tasks: commands, outputs, and decisions

This is the part you run during an incident. Every task includes: command, what the output means,
and what decision you make next.

Task 1: Confirm WireGuard sees the peer and is exchanging packets

cr0x@server:~$ sudo wg show
interface: wg0
  public key: 9zHhYw...redacted...
  private key: (hidden)
  listening port: 51820

peer: 2YpG3h...redacted...
  endpoint: 203.0.113.50:54022
  allowed ips: 10.40.0.2/32, 10.99.0.0/16
  latest handshake: 28 seconds ago
  transfer: 14.20 MiB received, 18.77 MiB sent
  persistent keepalive: every 25 seconds

Meaning: handshake is recent and counters are non-zero. WireGuard can reach the endpoint and decrypt/encrypt traffic.
Still not proof that the OS routes traffic into wg0, or that the far end forwards it.

Decision: if handshake is stale (minutes+) or transfer stays zero, focus on UDP reachability, endpoint address,
and firewall/NAT. If it’s healthy, move on to routing.

Task 2: Check interface addresses (the “what IP am I even using” step)

cr0x@server:~$ ip -brief address show dev wg0
wg0             UP             10.40.0.1/24 fd00:40::1/64

Meaning: wg0 has IPv4 and IPv6 addresses. If your peer expects /32 but you configured /24, that’s not automatically wrong,
but it affects your routing assumptions and what you NAT.

Decision: confirm both sides agree on addressing scheme (host routes vs subnets), and that
AllowedIPs includes the peer’s source addresses.

Task 3: Ask the kernel where it will send a specific destination

cr0x@server:~$ ip route get 10.99.10.25
10.99.10.25 dev wg0 src 10.40.0.1 uid 0
    cache

Meaning: the kernel would send packets to 10.99.10.25 via wg0. Good.
If it says dev eth0 or another interface, the kernel routing table is not aligned with your intent.

Decision: if routing is wrong, inspect routes and policy rules (ip route, ip rule).
Don’t touch WireGuard config yet.

Task 4: Do the same for IPv6 (because dual-stack loves to gaslight you)

cr0x@server:~$ ip -6 route get 2606:4700:4700::1111
2606:4700:4700::1111 via fe80::1 dev eth0 src 2001:db8:12::10 metric 1024 pref medium

Meaning: IPv6 traffic is going out eth0, not wg0. If you expected a full tunnel, you probably forgot
::/0 in AllowedIPs or the policy routing rules for IPv6 weren’t added.

Decision: decide if you want IPv6 tunneled, blocked, or left alone. “I don’t know” is how you end up
with half a tunnel and a confusing outage.

Task 5: Inspect routes added for AllowedIPs

cr0x@server:~$ ip route show table main | grep -E 'wg0|10\.99\.|10\.40\.'
10.40.0.0/24 dev wg0 proto kernel scope link src 10.40.0.1
10.99.0.0/16 dev wg0 scope link

Meaning: you have a direct route to 10.99.0.0/16 via wg0. That aligns with the peer’s AllowedIPs.
If the route is missing, your system won’t even try to put traffic into the tunnel.

Decision: if routes are missing, check whether you used wg (no routes) vs wg-quick (adds routes),
or whether NetworkManager/systemd-networkd is managing routes elsewhere.

Task 6: Check policy routing rules (full tunnel and “why is only some traffic going via wg0”)

cr0x@server:~$ ip rule show
0:      from all lookup local
32764:  not from all fwmark 0xca6c lookup 51820
32765:  from all lookup main suppress_prefixlength 0
32766:  from all lookup main
32767:  from all lookup default

Meaning: everything except packets marked with fwmark 0xca6c (WireGuard’s own encrypted UDP; 0xca6c is 51820) is looked up in table 51820.
The suppress_prefixlength 0 rule lets more-specific routes in the main table win first; that’s wg-quick’s trick to avoid routing loops.

Decision: if you expected full-tunnel but don’t see these rules, wg-quick likely didn’t apply them
(or another tool removed them). Fix routing first; don’t randomly widen AllowedIPs.

Task 7: Inspect the WireGuard-specific routing table (if used)

cr0x@server:~$ ip route show table 51820
default dev wg0 scope link
203.0.113.50 via 192.0.2.1 dev eth0

Meaning: in table 51820, default goes via wg0, but the peer endpoint 203.0.113.50 is explicitly routed via eth0’s gateway.
That’s the “don’t tunnel the tunnel” exception, here done as an explicit host route (wg-quick itself relies on the fwmark bypass instead).

Decision: if the endpoint exception is missing or points to the wrong gateway, the tunnel may flap or never establish.
Fix endpoint routing, especially if the endpoint is a hostname that resolves to multiple IPs.

Task 8: Verify forwarding is enabled on a gateway peer

cr0x@server:~$ sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
net.ipv4.ip_forward = 0
net.ipv6.conf.all.forwarding = 0

Meaning: this node will not forward packets between interfaces. If it’s supposed to act as a site-to-site gateway,
that’s a hard stop.

Decision: enable forwarding (and persist it) if this host should route traffic; otherwise redesign:
keep it as a host-only tunnel endpoint.

Task 9: Confirm firewall/NAT rules match the design (iptables example)

cr0x@server:~$ sudo iptables -t nat -S | grep -E 'POSTROUTING|wg0|eth0'
-A POSTROUTING -s 10.40.0.0/24 -o eth0 -j MASQUERADE

Meaning: packets sourced from 10.40.0.0/24 leaving eth0 will be NATed. If your clients live in 10.40.0.0/24 and you want them
to reach the internet through this server, this is likely required.

Decision: if you need site-to-site without NAT, do not masquerade. Instead, add routes on the remote network back to 10.40.0.0/24.
NAT “works” until it breaks auditing, inbound connectivity, and your will to live.

Task 10: Capture traffic to see the wrong turn

cr0x@server:~$ sudo tcpdump -ni wg0 host 10.99.10.25
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wg0, link-type RAW (Raw IP), snapshot length 262144 bytes
12:11:02.100112 IP 10.40.0.1 > 10.99.10.25: ICMP echo request, id 2101, seq 1, length 64
12:11:03.110144 IP 10.40.0.1 > 10.99.10.25: ICMP echo request, id 2101, seq 2, length 64

Meaning: packets are entering wg0. If there are no replies, the issue is likely beyond this host:
remote routing, remote firewall, or AllowedIPs source validation on the remote side.

Decision: capture on the remote end too. If the remote receives but doesn’t reply, inspect its routes and firewall.
If it never receives, inspect endpoint NAT and whether you’re sending to the correct peer.

Task 11: Inspect WireGuard peer AllowedIPs overlaps (the stealth foot-gun)

cr0x@server:~$ sudo wg show wg0 allowed-ips
2YpG3h...redacted...	10.40.0.2/32
2YpG3h...redacted...	10.99.0.0/16
7QkLm1...redacted...	10.99.10.0/24

Meaning: there is an overlap: one peer claims 10.99.0.0/16, another claims 10.99.10.0/24.
Longest prefix match means 10.99.10.x will go to the /24 peer, not the /16 peer.

Decision: decide if that’s intentional. If not, remove or narrow one of the prefixes.
If yes, document it loudly and test failover behavior, because it’s easy to break during onboarding.
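The check in this task is easy to automate. A small sketch that flags cross-peer overlaps, assuming you feed it `wg show wg0 allowed-ips` output (tab-separated pubkey and prefixes; the sample data below is hypothetical):

```python
import ipaddress
from itertools import combinations

# Sample `wg show wg0 allowed-ips` output (pubkey<TAB>prefixes); hypothetical data
WG_OUTPUT = """\
2YpG3h=\t10.40.0.2/32 10.99.0.0/16
7QkLm1=\t10.99.10.0/24
"""

def find_overlaps(wg_show_output: str):
    """Return (peer_a, net_a, peer_b, net_b) for every cross-peer prefix overlap."""
    claims = []  # list of (peer, network)
    for line in wg_show_output.strip().splitlines():
        peer, _, prefixes = line.partition("\t")
        for p in prefixes.split():
            if p != "(none)":
                claims.append((peer, ipaddress.ip_network(p)))
    return [
        (pa, na, pb, nb)
        for (pa, na), (pb, nb) in combinations(claims, 2)
        if pa != pb and na.version == nb.version and na.overlaps(nb)
    ]

for pa, na, pb, nb in find_overlaps(WG_OUTPUT):
    print(f"overlap: {pa} claims {na}, {pb} claims {nb}")
```

Run it in CI against generated configs and an unjustified overlap never reaches production.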

Task 12: Verify what wg-quick actually applied (audit the “translator”)

cr0x@server:~$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.40.0.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 10.99.0.0/16 dev wg0
[#] resolvconf -a tun.wg0 -m 0 -x
[#] iptables -A FORWARD -i wg0 -j ACCEPT
[#] iptables -A FORWARD -o wg0 -j ACCEPT
[#] iptables -t nat -A POSTROUTING -s 10.40.0.0/24 -o eth0 -j MASQUERADE

Meaning: this is the real change set. You can see the MTU, the routes, DNS changes, and iptables rules.
If your mental model differs from this output, your mental model is wrong.

Decision: if wg-quick output shows changes you did not intend (DNS manipulation, NAT, forwarding),
stop using implicit behavior and pin it down with explicit config or your own orchestration.

Task 13: Check reverse path filtering (a quiet dropper)

cr0x@server:~$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.wg0.rp_filter
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.wg0.rp_filter = 1

Meaning: strict reverse path filtering can drop packets if the kernel thinks the return route for the source IP
wouldn’t go back out the same interface. With policy routing and asymmetric paths, this can break WireGuard traffic in creative ways.

Decision: on routing-heavy gateways, consider setting rp_filter to loose mode (2) where appropriate,
but do it intentionally and with security review.

Task 14: Validate MTU/PMTU quickly (don’t guess, measure)

cr0x@server:~$ ping -c 3 -M do -s 1372 10.99.10.25
PING 10.99.10.25 (10.99.10.25) 1372(1400) bytes of data.
1380 bytes from 10.99.10.25: icmp_seq=1 ttl=62 time=31.2 ms
1380 bytes from 10.99.10.25: icmp_seq=2 ttl=62 time=30.9 ms
1380 bytes from 10.99.10.25: icmp_seq=3 ttl=62 time=31.0 ms

--- 10.99.10.25 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms

Meaning: if this succeeds at a realistic size, fragmentation issues are less likely.
If it fails with “Message too long,” your MTU is too high somewhere along the path.

Decision: adjust MTU on wg0 (or let wg-quick set it), especially across PPPoE, LTE, or nested tunnels.
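The numbers behind that test are simple arithmetic: WireGuard adds a fixed per-packet overhead (outer IP + UDP + WireGuard data-message header + auth tag), which is where the common 1420 default comes from. The uplink values below are examples:

```python
# Back-of-envelope WireGuard MTU math (well-known header sizes; uplink MTUs are examples)
WG_OVERHEAD = 16 + 16          # WireGuard data-message header + Poly1305 auth tag
UDP_HEADER = 8
IP4_HEADER, IP6_HEADER = 20, 40

def wg_mtu(uplink_mtu: int, outer_ipv6: bool = True) -> int:
    """Largest inner packet that fits in one outer datagram on this uplink."""
    outer = IP6_HEADER if outer_ipv6 else IP4_HEADER
    return uplink_mtu - outer - UDP_HEADER - WG_OVERHEAD

print(wg_mtu(1500))                     # 1420: wg-quick's default assumes a 1500-byte uplink
print(wg_mtu(1492, outer_ipv6=False))   # e.g. a PPPoE uplink with an IPv4 endpoint
```

This also explains the ping in Task 14: `-s 1372` plus 28 bytes of ICMP/IP headers is a 1400-byte inner packet, safely under 1420.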

Task 15: Confirm routes on the remote side (because return paths are a thing)

cr0x@server:~$ ssh ops@remote-gw 'ip route show | grep 10.40.0.0/24'
10.40.0.0/24 dev wg0 scope link

Meaning: remote side knows 10.40.0.0/24 is reachable via wg0, so replies can return.
If the remote is not the endpoint but a LAN router behind it, you need routes there too.

Decision: if return route is missing, add it (preferred) or use NAT (last resort) depending on your security and observability needs.

Peer selection and the “most specific prefix wins” trap

WireGuard’s peer selection is deterministic and simple: it finds the peer with an AllowedIPs entry
that best matches the destination IP using longest-prefix match. That’s basically how routing tables work too.
Simplicity is good—until you accidentally create a policy you didn’t mean.

What happens with overlaps

Imagine you have:

  • Peer A: AllowedIPs = 10.0.0.0/8 (a “catch-all corp network” peer)
  • Peer B: AllowedIPs = 10.42.0.0/16 (a specific site-to-site)

Traffic to 10.42.x.y goes to Peer B. Traffic to 10.99.x.y goes to Peer A.
That’s fine when intentional. It’s also a source of “why did prod traffic go to the lab VPN” incidents
when someone copies a config stanza.

Overlaps also affect inbound validation: if Peer B sends a packet with source 10.99.1.10
but Peer B doesn’t have 10.99.0.0/16 in its AllowedIPs, the receiver drops the packet.
So overlaps can cause outbound steering and inbound drops depending on who believes what.

Don’t treat AllowedIPs like “advertised routes” unless you build the system around it

In some VPN systems, the control plane distributes routes dynamically. WireGuard does not.
If you want something route-distribution-like, you have to build it (with config management, templating,
or your own controller) and you must then manage overlaps as a first-class concern.

A pragmatic rule: avoid overlaps unless you’re doing it on purpose and you can explain the reason in one sentence.
If you can’t, you’re accumulating future outages.

Split tunnel vs full tunnel: what changes and what breaks

Split tunnel: only send specific subnets over wg0

This is the sane default for most corporate and site-to-site cases. Example intent:
“Only 10.99.0.0/16 should go over WireGuard.”

Configuration pattern:

  • Client peer AllowedIPs includes only remote private ranges and maybe the server’s wg IP.
  • Kernel routes for those ranges point to wg0.
  • Default route stays on the normal network.

Failure modes:

  • Remote uses DNS names that resolve to public IPs; traffic goes out the default route, not the tunnel.
  • IPv6 AAAA records bypass your IPv4-only tunnel.
  • Application pins IPs; your “only subnet X” assumption is wrong for how the service is deployed.

Full tunnel: route the internet through the VPN

This is useful for untrusted networks, egress control, or consistent source IP. It’s also where
most AllowedIPs confusion becomes expensive.

Configuration pattern:

  • Client peer AllowedIPs includes 0.0.0.0/0 and usually ::/0.
  • Policy routing is used so the endpoint IP is exempt and the system doesn’t self-tunnel its control traffic.
  • Server/gateway must NAT or route client traffic onward.

Failure modes:

  • DNS breaks: your resolver is reachable only over the tunnel, but the tunnel depends on DNS to find the endpoint.
  • Endpoint changes IP; your exception route points to yesterday’s address; tunnel dies on reconnect.
  • Only IPv4 is tunneled, IPv6 leaks or blackholes, browsers get “random” timeouts.

Joke #2: Full-tunnel VPNs are like moving houses: you learn how much stuff you own when you have to route all of it.

Three corporate mini-stories from the trenches

Incident: the wrong assumption that “AllowedIPs is an ACL”

A mid-sized company rolled out WireGuard for contractor access. The design was straightforward: contractors should reach a handful of internal
services (a Git server, a ticket system, a build cache). The security team insisted: “Only allow those IPs.”
The network team complied by setting client configs with a narrow AllowedIPs list: a few /32s for the services.

The tunnel came up. Handshakes looked good. Contractors still couldn’t reach the services reliably. Worse, the failures were intermittent,
which is the most expensive kind of failure because everyone wastes time proving it isn’t them.
The on-call SRE looked at WireGuard stats: traffic was leaving the client, arriving at the server, and then… nothing.

The root cause was subtle but predictable: the internal services were behind load balancers and sometimes answered from different IPs.
Contractors connected to the VIP (one of the /32s), but subsequent connections were redirected to backend IPs not present in AllowedIPs.
The client’s kernel routed those backend connections out the normal interface, not wg0. From the server’s perspective, it never even saw
those packets. From the user’s perspective, “VPN is flaky.”

The fix was not “widen everything to 0.0.0.0/0.” The fix was aligning routing intent with service topology:
they switched from per-host /32s to specific internal subnets that actually contained the load balancer and backends, plus they pinned DNS
for those services to internal addresses reachable via the tunnel. They also documented the rule: AllowedIPs governs routing,
not permissioning. Firewall rules handle permissioning.

The lesson: if you try to use AllowedIPs as a security boundary, you’ll build an unreliable security boundary.
Put access control where it belongs: firewalls, identity-aware proxies, and service auth. Let routing be routing.

Optimization that backfired: “make it simpler” by collapsing AllowedIPs

A different organization ran dozens of site-to-site WireGuard peers into a central hub. Each peer had carefully scoped AllowedIPs:
site A owned 10.10.0.0/16, site B owned 10.20.0.0/16, and so on. It worked, mostly.
Then someone proposed a “cleanup”: “Let’s just put 10.0.0.0/8 for every site, so we don’t have to update configs when a new subnet appears.”

The change sailed through review because it sounded operationally convenient. It also put the same prefix in many peers’ configs.
But a WireGuard prefix can belong to only one peer at a time: each time a site’s config was (re)applied, 10.0.0.0/8 silently moved
to the most recently configured peer. The hub’s selection became sensitive to config ordering and update timing.

The incident that followed wasn’t a total outage. It was worse: certain subnets intermittently routed to the wrong site, and the wrong site’s firewall
dropped them. Applications that retried “sometimes worked.” Monitoring lit up, but in patterns that looked like latency issues.
The network team spent days chasing phantom congestion.

Rolling back wasn’t immediate because some new sites had been deployed assuming the “simplified” configuration, and their routing depended on it.
The eventual fix was to restore non-overlapping ownership: each site got explicit prefixes, maintained by infrastructure-as-code.
The “optimization” cost more than the original manual work ever did.

Boring but correct practice that saved the day: explicit route tests in CI

The most resilient team I’ve worked with treated WireGuard configs like code, not like artisanal snowflakes.
Every change to a peer’s AllowedIPs or endpoint went through a pipeline that ran a small battery of route assertions.
Not fancy. Just unforgiving.

They had a test harness that booted a lightweight network namespace environment, applied the proposed config, and ran
ip route get checks for a set of critical destinations. It also inspected wg show allowed-ips for overlaps.
If a new prefix overlapped an existing one without an approved exception, the pipeline failed.

One Friday, a well-meaning engineer attempted to add a new remote subnet and mistakenly typed 10.0.0.0/8 instead of 10.80.0.0/16.
On a typical team, that becomes a weekend incident. Here, the CI job rejected it immediately with a readable diff: “Overlap with existing peer X; would redirect
traffic for these destinations.”

Nothing heroic happened. No pager went off. That’s the point.
The practice wasn’t glamorous, but it made routing changes boring—and boring is the most underrated feature in production systems.

One quote that still holds up: Hope is not a strategy — General Gordon R. Sullivan.

Common mistakes: symptoms → root cause → fix

1) “Handshake works, but I can’t ping anything behind the peer”

Symptom: wg show shows a recent handshake; ping to remote LAN hosts fails.

Root cause: missing forwarding on the remote gateway, or missing return route from remote LAN back to your VPN subnet.

Fix: enable IP forwarding on the gateway and add a route on the remote LAN router pointing your VPN subnet via the WireGuard gateway. Use NAT only if you must.

2) “Traffic still goes out my normal internet connection” (split tunnel surprise)

Symptom: expected internal service traffic via wg0, but ip route get shows dev eth0 (or Wi‑Fi).

Root cause: kernel route missing for the destination, or destination IP isn’t in any peer’s AllowedIPs.

Fix: ensure the destination prefixes are in the peer’s AllowedIPs and that routes exist (via wg-quick or manual ip route add).

3) “Full tunnel enabled, now the tunnel won’t come up”

Symptom: after setting AllowedIPs = 0.0.0.0/0, handshake fails or flaps.

Root cause: endpoint traffic is being routed into wg0 (tunneling the tunnel), often because the endpoint exception route is missing or wrong.

Fix: use wg-quick’s policy routing approach or add an explicit route for the endpoint IP via the physical gateway. Avoid endpoint hostnames that change without handling updates.

4) “Some subnets work, one subnet mysteriously goes to the wrong peer”

Symptom: traffic to 10.99.10.0/24 goes somewhere unexpected; other 10.99.x networks behave differently.

Root cause: overlapping AllowedIPs with longer-prefix match selecting a different peer.

Fix: remove overlaps or make them intentional and documented; verify with wg show allowed-ips.

5) “It works for IPv4, but IPv6 is broken (or leaking)”

Symptom: IPv4 reaches internal resources, but browsers time out on some sites; ip -6 route get shows non-VPN egress.

Root cause: you tunneled 0.0.0.0/0 but not ::/0, or DNS returns AAAA records that take a different path.

Fix: either fully support IPv6 through the tunnel (AllowedIPs + routes + firewall) or explicitly disable/blackhole IPv6 for that host when on VPN.

6) “Packets enter wg0 but never get replies”

Symptom: tcpdump on wg0 shows outgoing packets; no incoming replies.

Root cause: remote side drops inbound because source IP isn’t in its AllowedIPs (anti-spoofing check), or remote firewall blocks it.

Fix: ensure the remote peer’s AllowedIPs includes your source range(s). Then verify remote firewall allows traffic from the VPN subnet.

7) “Everything is slow, especially large downloads”

Symptom: small pings work; large transfers stall or crawl.

Root cause: MTU/PMTU mismatch causing fragmentation loss or blackholing ICMP “fragmentation needed.”

Fix: reduce wg0 MTU; test with ping -M do sizes; ensure ICMP is not blocked on the path.

8) “After a network restart, routing is wrong until I restart wg0”

Symptom: WireGuard is up, but kernel routes/policy rules are missing or altered after link changes.

Root cause: competing network managers (NetworkManager, systemd-networkd, custom scripts) rewriting routes and rules.

Fix: choose one authority. If you use wg-quick, ensure it’s the one applying rules; otherwise manage routes explicitly in your network stack.

Checklists / step-by-step plan

Checklist A: Build a correct split tunnel (site-to-site or internal access)

  1. Define the real destination prefixes (subnets, not wishful /32s). Include load balancers, backends, and DNS resolvers as needed.
  2. Set peer AllowedIPs to exactly those prefixes on the client side. Keep it non-overlapping whenever possible.
  3. Bring up the tunnel with wg-quick and confirm routes exist in ip route.
  4. Use ip route get to confirm kernel decisions for each critical destination.
  5. Verify remote side return routes for the client VPN subnet (or configure NAT deliberately).
  6. Document ownership: which peer “owns” which prefixes. Treat it like IPAM, because it effectively is.
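Step 4 of this checklist is the easiest to automate. A sketch of a route assertion like the one in the CI story: parse `ip route get` output and compare the egress device against an intent table (the expected-device mapping and sample outputs are hypothetical):

```python
import re

# Critical destination -> device the kernel should choose (hypothetical intent table)
EXPECTED = {"10.99.10.25": "wg0", "203.0.113.50": "eth0"}

def egress_dev(ip_route_get_output: str) -> str:
    """Extract the egress device from one line of `ip route get` output."""
    m = re.search(r"\bdev (\S+)", ip_route_get_output)
    if not m:
        raise ValueError("no 'dev' in route output")
    return m.group(1)

def assert_routes(outputs: dict) -> list:
    """Compare actual `ip route get` output per destination against intent; return violations."""
    return [
        f"{dst}: expected {EXPECTED[dst]}, got {egress_dev(out)}"
        for dst, out in outputs.items()
        if egress_dev(out) != EXPECTED[dst]
    ]

# In a real harness you would capture these with: subprocess.run(["ip", "route", "get", dst], ...)
sample = {
    "10.99.10.25": "10.99.10.25 dev wg0 src 10.40.0.1 uid 0",
    "203.0.113.50": "203.0.113.50 via 192.0.2.1 dev eth0 src 192.0.2.10",
}
print(assert_routes(sample))  # [] means every critical destination routes as intended
```

An empty list is the boring outcome you want; anything else fails the pipeline before the change ships.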

Checklist B: Build a correct full tunnel (internet egress over WireGuard)

  1. Decide IPv4-only vs dual-stack. If you won’t tunnel IPv6, explicitly disable it or block it to avoid leakage and weird hangs.
  2. Set AllowedIPs to default routes: 0.0.0.0/0 and usually ::/0.
  3. Confirm policy routing exists: ip rule and a dedicated routing table that includes an endpoint exception route.
  4. Confirm server-side forwarding and NAT if the server is providing egress.
  5. Validate DNS behavior: ensure resolvers are reachable after the tunnel comes up and before it goes down again on reconnect.
  6. Run PMTU tests and set MTU explicitly if your environment is variable (LTE, PPPoE, nested VPNs).

Checklist C: Multi-peer hub configuration without surprises

  1. Make AllowedIPs non-overlapping by default. Overlaps require an explicit design note and tests.
  2. Use config management to generate configs consistently. Humans copy/paste errors faster than they fix them.
  3. Run automated overlap detection using wg show allowed-ips output validation.
  4. Test route selection with ip route get and packet capture for at least one IP per prefix.
  5. Plan endpoint reachability (static IPs, stable DNS, or explicit endpoint route exceptions).

FAQ

1) Is AllowedIPs a firewall?

No. It influences outbound peer selection and validates inbound source addresses. Use firewall rules for authorization.
If you “secure” access only by making AllowedIPs narrow, you’ll mostly just break routing.

2) Why does my WireGuard interface have an IP that isn’t in AllowedIPs?

Interface addresses (Address =) and AllowedIPs are different concepts. The interface address is what the local host uses as a source.
AllowedIPs is what each peer claims for routing and source validation. They must be compatible, but they are not the same knob.

3) Do I need to add routes manually when I change AllowedIPs?

If you use wg directly: yes, because it won’t touch routes. If you use wg-quick: it will add routes on up,
but changing AllowedIPs live may not update kernel routes the way you think. Treat changes as needing a controlled re-apply.

4) Why do I have “handshake but no traffic”?

Because handshake only proves UDP and keys work. Traffic can still fail due to missing kernel routes, missing forwarding, missing NAT,
remote firewall drops, or AllowedIPs source validation on the receiving side.

5) What happens if two peers have the same AllowedIPs prefix?

They can’t, for long: a given prefix maps to exactly one peer in WireGuard’s cryptokey routing table, so configuring it on a second peer
silently takes it away from the first (last writer wins). Combined with automated config re-application, that looks like random instability.
Don’t create ties; make ownership explicit.

6) Should I put 0.0.0.0/0 in AllowedIPs on the server?

Usually no. On a “server” that accepts many clients, you generally give each client a /32 (and maybe client-owned subnets).
Putting 0.0.0.0/0 on the server peer entry for a client tells the server to send all traffic to that client. That’s rarely what you want.
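A typical server-side pattern illustrating this, with hypothetical keys and subnets: each client owns its /32, and a gateway peer additionally owns the subnet it legitimately routes — never a default route:

```ini
# Server-side peer stanzas (hypothetical keys and subnets)
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.40.0.2/32                 # plain road-warrior client: its tunnel IP only

[Peer]
PublicKey = <site-b-gateway-public-key>
AllowedIPs = 10.40.0.3/32, 10.42.0.0/16   # gateway peer: its tunnel IP plus the LAN behind it
```

The asymmetry is the point: clients may route everything to the server, but the server routes to each client only what that client owns.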

7) Why does adding ::/0 break things even when IPv6 “isn’t used”?

Because applications absolutely use IPv6 when it exists. If you route IPv6 into the tunnel but don’t provide IPv6 forwarding/DNS/egress on the far end,
you create a blackhole. Decide deliberately: support it fully or disable it.

8) Can I use AllowedIPs to implement “only these services go over VPN”?

Only if those services map cleanly to destination IP prefixes you control. If the service uses CDNs, dynamic backends, or redirects,
destination IPs will change and your routing intent will be violated. In those cases, use application-layer controls and treat VPN as transport.

9) Why does ping work but TCP doesn’t?

MTU and PMTU problems are the classic answer. ICMP is small; TCP with MSS/PMTU issues can stall. Also check stateful firewalls and asymmetric routing.
Confirm with PMTU pings and packet captures.

10) Is wg-quick “bad”?

No. It’s a solid tool. But it’s also a wrapper that modifies routes, rules, DNS, and sometimes firewall state.
In production, you either standardize on it and test its behavior, or you replace it with explicit orchestration.
The worst option is “we think we’re using wg-quick but other tools undo it.”

Conclusion: practical next steps

If WireGuard traffic doesn’t go where you expect, stop staring at the handshake.
Treat AllowedIPs as routing policy input, then verify the system layers in order:
kernel route decision, WireGuard peer selection, and remote return path.

Next steps you can do today:

  1. Pick one failing destination and run ip route get (and ip -6 route get) for it. Believe the output.
  2. Run wg show allowed-ips and hunt overlaps. If you find one you can’t justify, remove it.
  3. On gateway peers, verify forwarding and the return route. Fix those before touching MTU or DNS.
  4. Decide split vs full tunnel explicitly, including IPv6 behavior. Half decisions create full outages.
  5. Write down prefix ownership. Seriously. Your future self will thank you during the next “why is prod going to staging” moment.