The symptom: “VPN connected” and yet Slack still says you’re offline, your SSH times out, or DNS quietly leaks to whatever coffee-shop router you’re currently regretting. WireGuard is refreshingly simple, but networking has a talent for turning “simple” into “why is this packet crying?”
This is a production-minded guide to getting a WireGuard client running on Windows, macOS, and Linux—fast—plus how to diagnose the classic failure modes without turning your laptop into an accidental router.
1) The mental model: what WireGuard actually does on a client
WireGuard is not a magical “secure internet” switch. It’s a very small, very opinionated piece of software that creates a virtual network interface (usually wg0 on Linux, a “WireGuard Tunnel” adapter on Windows, a utun interface on macOS). When the tunnel is up, your OS routes some traffic into that interface. The WireGuard peer on the other side decrypts and forwards it onward (often to your corporate network, sometimes to the public internet).
The three levers you actually control
- Keys: who can talk to whom.
- AllowedIPs: what destinations are routed into the tunnel (and also how peers are matched on the receiver).
- Endpoint: where the other side lives right now (and whether NAT will keep it reachable).
Everything that hurts is usually one of these in disguise: routing decisions, DNS behavior, firewall/NAT state, or MTU. WireGuard is blunt: it sets up crypto and shoves packets. It will not negotiate routes like a chatty enterprise VPN; it will not “fix” your DNS unless you tell it to; it will not guess whether you intended split tunneling or full tunneling.
One operationally useful mindset: treat WireGuard like a point-to-point network cable that you plug in and out. If packets don’t traverse the cable, you inspect (1) link up (handshake), (2) L3 routing, (3) L4 reachability, (4) name resolution. Same order every time. Your future self will send a thank-you note.
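On Linux, that inspection order maps to four quick commands. A minimal sketch, assuming the interface is wg0 and reusing the placeholder internal host and name that appear later in this guide:

cr0x@server:~$ sudo wg show wg0 latest-handshakes      # (1) link up: is the handshake recent?
cr0x@server:~$ ip route get 10.20.5.10                 # (2) L3: does this destination route via wg0?
cr0x@server:~$ nc -vz 10.20.5.10 22                    # (3) L4: does a real TCP port answer?
cr0x@server:~$ dig +short internal-api.corp.example    # (4) names: does the right resolver answer?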
One idea to keep you honest, paraphrasing John Allspaw: reliability comes from how you respond to failure, not from pretending failure won’t happen.
Joke #1: WireGuard is like a well-trained bouncer—quiet, efficient, and completely uninterested in your feelings about routing.
2) Interesting facts & context (so you stop fighting the tool)
These aren’t trivia for trivia’s sake. They explain why WireGuard behaves the way it does—and why some “traditional VPN expectations” don’t apply.
- WireGuard was designed to be tiny: far fewer lines of code than classic VPN stacks, aiming to reduce attack surface and operational complexity.
- It uses modern primitives by default: you don’t pick cipher suites; the protocol chooses a narrow, contemporary set (NoiseIK, ChaCha20-Poly1305, Curve25519, BLAKE2s, HKDF).
- It’s a layer-3 VPN: it routes IP packets (and can do IPv6 cleanly). It is not an L2 bridge unless you build that yourself on top.
- “AllowedIPs” is both routing and access control: on the receiving side, it’s the mapping that decides which peer a decrypted packet belongs to; overlaps can cause confusing behavior.
- Roaming is built-in: if your client’s public IP changes (Wi‑Fi to LTE), WireGuard can follow as soon as it sees authenticated traffic from the new address.
- NAT friendliness isn’t magic: it still depends on UDP state staying alive; that’s why PersistentKeepalive exists.
- Linux-first, but cross-platform reality: the Linux kernel implementation is the reference for performance; other platforms use native integrations or a userspace implementation with OS-specific quirks.
- WireGuard became “mainstream Linux” quickly: it was merged into the Linux kernel (5.6 era), which is why it feels like a first-class citizen on modern distros.
- It intentionally avoids negotiation features: no IKE-style complexity. Great for humans. Sometimes uncomfortable for enterprise checklists.
3) Keys, peers, and the one file that matters
A WireGuard client configuration is a small INI-ish file. It has two stanzas: [Interface] (your local virtual interface) and one or more [Peer] entries (the remote endpoints you encrypt to).
Minimal client config (split tunnel example)
This is the shape you should start from. Then add complexity only when you have to. Especially in corporate networks, every extra line is a new way to lose an afternoon.
cr0x@server:~$ cat wg-client.conf
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 10.8.0.0/24, 10.20.0.0/16
PersistentKeepalive = 25
Interpretation, in plain ops English:
- Address: the client’s tunnel IP. The /32 is not a typo; it means “this interface owns only this IP,” which reduces accidental routing weirdness.
- DNS: what resolver to use while the tunnel is up. Without it, you’ll often have “I can ping IPs but hostnames fail.”
- AllowedIPs: split tunnel list. Only those destinations will be routed into the tunnel. If you put 0.0.0.0/0 (and maybe ::/0), you are full-tunneling everything.
- PersistentKeepalive: keep NAT state alive for clients behind restrictive NATs (hotels, LTE). 25 seconds is the common “just works” number.
Two rules that prevent most outages
- Do not overlap AllowedIPs between peers on the same interface unless you enjoy nondeterministic peer selection.
- Decide split vs full tunnel upfront and encode it explicitly. “We’ll just add 0.0.0.0/0 and see” is how you break printing, VoIP, and your own patience.
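For contrast, here is what the full-tunnel version of the same client config looks like; a sketch reusing the placeholder keys and endpoint from above, with default routes replacing the split-tunnel list:

cr0x@server:~$ cat wg-client-full.conf
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.8.0.1
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25

Everything, including internet traffic and DNS, now rides the tunnel. If that isn’t a decision you made on purpose, it’s an outage you scheduled by accident.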
4) Windows client setup (simple, with the usual footguns)
Recommended approach: official WireGuard for Windows app
On Windows, be boring: use the official WireGuard client UI. It installs a WireGuard tunnel adapter and manages routes cleanly. You can import a config file, toggle the tunnel, and it survives reboots without creative scripting.
Steps that work in real companies
- Import the config: use “Import tunnel(s) from file”. Don’t hand-type keys. Humans are bad at base64.
- Name the tunnel something explicit: “corp-split” or “prod-breakglass.” “test” becomes permanent, and then it becomes your identity.
- Enable on-demand only if you understand Windows networking: it’s easy to create “connected but unusable” states when networks change.
- Check DNS behavior: Windows has a long history of DNS suffix search lists and resolver precedence doing unexpected things when multiple interfaces exist.
Windows pitfalls you should anticipate
- Handshake works, traffic doesn’t: often a route or firewall issue, not WireGuard itself. Windows can have multiple default routes with metrics that surprise you.
- DNS leaks or wrong resolver: if you don’t set DNS in the config, Windows keeps using whatever it feels like (often the physical NIC’s DNS).
- Adapter priority/metrics: sometimes Windows insists on sending traffic outside the tunnel because the physical interface has a lower metric.
- Captive portals: hotels and guest Wi‑Fi can block UDP or require web login; the tunnel won’t come up until the portal is satisfied.
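When Windows says “connected” but traffic misbehaves, two read-only PowerShell checks settle the routing and DNS questions before you touch the config. A sketch, using 10.20.5.10 as a placeholder internal host (standard NetTCPIP/DnsClient cmdlets):

PS C:\> Find-NetRoute -RemoteIPAddress "10.20.5.10" | Select-Object InterfaceAlias, NextHop
PS C:\> Get-DnsClientServerAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, ServerAddresses

If the first command names your Wi‑Fi adapter instead of the WireGuard tunnel, fix routes/metrics; if the second shows the tunnel interface without your internal resolver, fix DNS. Neither problem is solved by regenerating keys.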
5) macOS client setup (app vs wg-quick, and the DNS traps)
macOS gives you two common options: the official WireGuard app (recommended) or command-line tooling (via third-party packages). For most people who want reliability, the app wins: it integrates with macOS’s network extension system and handles interface lifecycle without you duct-taping launch agents.
Use the app unless you have a reason not to
Corporate Wi‑Fi and frequent sleep/wake cycles are where “almost works” VPN tooling dies. The app tends to recover better from network changes and reassert DNS/route state.
macOS-specific DNS realities
macOS DNS is not “one file, one resolver.” It’s a stack with per-interface resolvers, search domains, and scoped resolution. A WireGuard tunnel can be up and functioning, while your browser still resolves internal names through the wrong resolver because of resolver order or missing search domains.
If your company uses split DNS (internal domains resolved only via internal DNS), you should set both:
- DNS server (e.g., 10.8.0.1), and
- Search domains (e.g., corp.example), when your client supports it.
Without search domains, users type short hostnames and get public DNS answers. That’s not a “user error”; it’s a predictable configuration outcome.
macOS pitfall: sleep/wake and stale routes
After wake, you may see a tunnel “connected” but no traffic. Usually it’s UDP state or a route table that didn’t update cleanly. Toggling the tunnel often fixes it; so does setting PersistentKeepalive for roaming clients.
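Both macOS questions, which resolver wins and which interface carries a given route, are answerable with built-in tools. A sketch, with 10.20.5.10 again standing in for an internal host:

cr0x@mac:~$ scutil --dns | head -n 20
cr0x@mac:~$ route -n get 10.20.5.10

The scutil output lists resolvers in order with their search domains; the route lookup should name a utun interface while the tunnel is up. If it names en0 instead, routing is the problem, not DNS.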
6) Linux client setup (wg-quick, NetworkManager, and routing reality)
Linux is where WireGuard feels like it was always meant to exist. You have the kernel module, wg tooling, and wg-quick which is a pragmatic wrapper that sets up the interface, routes, and DNS hooks.
The “just make it work” path: wg-quick
You put a config in /etc/wireguard/wg0.conf and run wg-quick up wg0. That’s it. The best part is that it’s reversible: wg-quick down wg0 cleans up routes and interface state.
NetworkManager integration: good, but know what it does
If you’re on a laptop with NetworkManager, you can import the WireGuard config and let it manage the tunnel. This can be excellent for roaming, but it adds another layer that may override routes or DNS depending on distro and version. When debugging, always be clear about who “owns” the interface—wg-quick or NetworkManager—not both.
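If NetworkManager owns the tunnel, import the config once and drive it with nmcli from then on; a minimal sketch (the connection name is derived from the file name):

cr0x@server:~$ sudo nmcli connection import type wireguard file ./wg-client.conf
cr0x@server:~$ sudo nmcli connection up wg-client

After this, do not also run wg-quick on the same config; two managers fighting over one interface is a debugging session you don’t need.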
Linux pitfalls that bite experienced engineers
- Policy routing interactions: modern systems can have multiple routing tables (especially with container/network plugins). Your tunnel routes might land in the wrong place if you’re not careful.
- Reverse path filtering: rp_filter can drop legitimate tunneled traffic in asymmetric routing scenarios.
- iptables/nftables NAT: for client setups this is usually irrelevant, but for “client that also forwards” or “client as gateway,” you must explicitly NAT and forward.
- systemd-resolved: DNS handling can surprise you if resolv.conf is a stub and you expect DNS= to just work without the right hooks.
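If DNS= from the config isn’t taking effect under systemd-resolved, you can set the per-link resolver and search domain directly and then verify; a sketch assuming wg0 and the resolver from the example config (these settings last only until the interface goes down):

cr0x@server:~$ sudo resolvectl dns wg0 10.8.0.1
cr0x@server:~$ sudo resolvectl domain wg0 corp.example
cr0x@server:~$ resolvectl status wg0

The real fix is making your tunnel manager apply DNS correctly; this is how you prove where the breakage is.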
7) Practical tasks: commands, expected output, and decisions (12+)
These are the checks I actually run when someone says “WireGuard is up but nothing works.” Each task includes: command, what the output means, and what decision you make next.
Task 1: Confirm interface exists and is up (Linux)
cr0x@server:~$ ip link show wg0
7: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/none
Meaning: Interface wg0 exists and is UP. mtu 1420 is typical. state UNKNOWN is normal for tunnels.
Decision: If it’s DOWN or missing, your tunnel isn’t up. Fix service/UI first. If it’s up, move on to handshake and routing.
Task 2: Check WireGuard handshake and traffic counters
cr0x@server:~$ sudo wg show
interface: wg0
public key: 5Z5Jw0y5mOQ4rGqgXwq8yW9q0l1bQp3m5G5m6l8P8xk=
private key: (hidden)
listening port: 51820
peer: 9sZxq2Qb7D2i2c8p9n5m0tTn0aVY7mXlCkQw1kVv0m4=
endpoint: 203.0.113.10:51820
allowed ips: 10.8.0.0/24, 10.20.0.0/16
latest handshake: 18 seconds ago
transfer: 12.34 MiB received, 9.87 MiB sent
persistent keepalive: every 25 seconds
Meaning: You have a recent handshake and packets moving. If handshake is “never,” you likely have endpoint/UDP/firewall/NAT issues.
Decision: If handshake is recent but transfer stays at 0, suspect routing/AllowedIPs/firewall on the far side.
Task 3: Validate your client’s tunnel IP
cr0x@server:~$ ip addr show dev wg0
7: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
inet 10.8.0.2/32 scope global wg0
valid_lft forever preferred_lft forever
Meaning: Client has 10.8.0.2/32 assigned. Good.
Decision: If missing or wrong, fix the config Address and bring the tunnel down/up.
Task 4: Confirm routes installed for AllowedIPs
cr0x@server:~$ ip route show table main | grep wg0
10.8.0.0/24 dev wg0 scope link
10.20.0.0/16 dev wg0 scope link
Meaning: Split-tunnel routes exist. Traffic to those CIDRs goes into wg0.
Decision: If routes are absent, either wg-quick didn’t run, or another manager removed routes, or you used a method that doesn’t auto-add them. Fix before touching DNS.
Task 5: See which route will be used for a specific destination
cr0x@server:~$ ip route get 10.20.5.10
10.20.5.10 dev wg0 src 10.8.0.2 uid 1000
cache
Meaning: The OS will send packets to 10.20.5.10 through wg0.
Decision: If it says dev eth0 (or Wi‑Fi interface), your AllowedIPs/routes are wrong. Fix routing first; don’t “debug WireGuard” when it’s not being used.
Task 6: Test reachability by IP (bypass DNS)
cr0x@server:~$ ping -c 2 10.20.5.10
PING 10.20.5.10 (10.20.5.10) 56(84) bytes of data.
64 bytes from 10.20.5.10: icmp_seq=1 ttl=63 time=28.3 ms
64 bytes from 10.20.5.10: icmp_seq=2 ttl=63 time=27.9 ms
--- 10.20.5.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 27.933/28.110/28.288/0.177 ms
Meaning: L3 reachability works through the tunnel.
Decision: If ping by IP fails but handshake exists, suspect remote firewall, missing routes on server side, or MTU. If ping works but names fail, go to DNS tasks.
Task 7: Test a TCP service path (because ICMP lies)
cr0x@server:~$ nc -vz 10.20.5.10 22
Connection to 10.20.5.10 22 port [tcp/ssh] succeeded!
Meaning: TCP/22 reachable; security groups/firewalls likely permit it.
Decision: If ping works but TCP fails, you have a firewall/service-level issue, not “VPN broken.” Escalate to the service owner or security policy.
Task 8: Check DNS resolver selection (Linux with systemd-resolved)
cr0x@server:~$ resolvectl status
Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 7 (wg0)
Current Scopes: DNS
Protocols: +DefaultRoute
Current DNS Server: 10.8.0.1
DNS Servers: 10.8.0.1
Meaning: DNS for wg0 is set to 10.8.0.1. Good.
Decision: If DNS server is still your Wi‑Fi router, expect internal name failures and leaks. Fix DNS in config or resolved settings.
Task 9: Prove internal DNS works (and see where it’s going)
cr0x@server:~$ dig @10.8.0.1 internal-api.corp.example A +short
10.20.5.10
Meaning: The internal resolver returns an internal address.
Decision: If this works but normal dig internal-api.corp.example fails, the issue is resolver selection/search domains, not the server.
Task 10: Check MTU and path MTU blackholes
cr0x@server:~$ ping -M do -s 1380 -c 2 10.20.5.10
PING 10.20.5.10 (10.20.5.10) 1380(1408) bytes of data.
1388 bytes from 10.20.5.10: icmp_seq=1 ttl=63 time=28.9 ms
1388 bytes from 10.20.5.10: icmp_seq=2 ttl=63 time=29.1 ms
--- 10.20.5.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
Meaning: IP packets of 1408 bytes (1380 bytes of ICMP payload plus 28 bytes of headers) with DF set make it through. Good sign.
Decision: If you see “Frag needed and DF set” or timeouts at larger sizes, lower the WireGuard MTU (often 1280–1420 range). MTU issues often present as “SSH works but Git clone stalls” or “web loads half a page.”
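You can trial a lower MTU live before committing anything to config; a sketch (1380 is a starting guess, and the change reverts the next time the tunnel is rebuilt):

cr0x@server:~$ sudo ip link set dev wg0 mtu 1380
cr0x@server:~$ ping -M do -s 1352 -c 2 10.20.5.10

The 1352-byte payload plus 28 bytes of headers exactly fills the new 1380-byte MTU; if that passes where larger sizes failed, you’ve found your ceiling.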
Task 11: Confirm UDP reachability to the endpoint (client-side)
cr0x@server:~$ nc -vzu -w 2 vpn.example.net 51820
Connection to vpn.example.net 51820 port [udp/*] succeeded!
Meaning: Your system can send UDP to that host/port. It does not guarantee replies (UDP), but it rules out some local egress blocks.
Decision: If this fails on a corporate network, you may need a different egress policy or a different endpoint/port. Don’t waste time tuning Keepalive if UDP is blocked outright.
Task 12: Inspect the firewall state (Linux client)
cr0x@server:~$ sudo nft list ruleset | sed -n '1,80p'
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
Meaning: Forwarding is dropped (common default), but for a client that’s fine. Input/output accept means the client itself can communicate.
Decision: If output is restricted, WireGuard may handshake but traffic to internal subnets might be blocked. Adjust rules carefully or use a host firewall profile that permits the tunnel.
Task 13: Verify sysctl that can break routed tunnels (rp_filter)
cr0x@server:~$ sysctl net.ipv4.conf.all.rp_filter
net.ipv4.conf.all.rp_filter = 1
Meaning: Reverse path filtering is in “strict” mode (1). This can drop traffic in asymmetric routing situations.
Decision: For normal client use, it’s usually okay. For complex split routing or multi-homing, consider setting to 2 (loose) on relevant interfaces after understanding the security implications.
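If you do need loose mode, scope it to the tunnel interface instead of flipping the global knob; a sketch (the kernel applies the maximum of the all and per-interface values, so setting the interface to 2 is enough):

cr0x@server:~$ sudo sysctl -w net.ipv4.conf.wg0.rp_filter=2
net.ipv4.conf.wg0.rp_filter = 2

Persist it in /etc/sysctl.d/ if it survives review; a sysctl -w alone disappears at reboot.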
Task 14: Confirm what DNS file you’re actually using
cr0x@server:~$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 39 Jan 2 10:11 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
Meaning: You’re on systemd-resolved stub. Editing /etc/resolv.conf directly will not persist and may not do what you think.
Decision: Configure DNS through resolvectl, NetworkManager, or your WireGuard manager, not by hand-editing stub files and hoping.
8) Fast diagnosis playbook (first/second/third checks)
If you only memorize one workflow, make it this. It finds the bottleneck quickly without cargo-culting config changes.
First: is the tunnel actually exchanging packets?
- Check the interface exists and is up.
- Check latest handshake and transfer counters.
- If handshake is never: focus on endpoint, UDP reachability, NAT, and keepalive.
Second: is the OS routing the right traffic into the tunnel?
- Confirm routes exist for your intended destinations (AllowedIPs).
- Use a route lookup for a known internal IP (ip route get on Linux).
- If traffic isn’t routed into the tunnel: fix AllowedIPs or route manager conflicts.
Third: can you reach something by IP, then by name?
- Ping an internal IP. Then test a TCP port (SSH/HTTPS) because ICMP can be allowed when TCP isn’t.
- If IP works but names fail: fix DNS (server, search domains, resolver priority).
Fourth: MTU and “works for small things” failures
- If chat works but file transfers hang: suspect MTU/fragmentation.
- Lower MTU on the client tunnel and retest.
Fifth: the far side (server routing/NAT/firewall)
- Client can only send encrypted packets. The server decides whether to route them onward.
- If handshake and client routing look fine, the server may be missing routes back to the client subnet or is dropping forward traffic.
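The first four stages are mechanical enough to script; a minimal sketch, assuming wg0 plus the placeholder internal IP and name used throughout this guide:

cr0x@server:~$ cat wg-triage.sh
#!/usr/bin/env bash
# Walk the playbook in order; stop reading at the first layer that fails.
set -u
IFACE=wg0
TARGET_IP=10.20.5.10                    # placeholder: a known internal host
TARGET_NAME=internal-api.corp.example   # placeholder: a known internal name
echo "== 1) handshake ==";    sudo wg show "$IFACE" latest-handshakes
echo "== 2) routing ==";      ip route get "$TARGET_IP"
echo "== 3) reachability =="; ping -c 2 -W 2 "$TARGET_IP"
echo "== 4) DNS ==";          dig +short "$TARGET_NAME"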
9) Common mistakes: symptom → root cause → fix
1) “Connected” but no handshake
Symptom: Client UI says active, but latest handshake is “never.”
Root cause: Wrong endpoint host/port, UDP blocked, captive portal, or NAT mapping expires instantly.
Fix: Verify endpoint, test UDP egress, complete captive portal login, add PersistentKeepalive = 25 for roaming clients, and ensure server is actually listening on that port.
2) Handshake works, but no traffic reaches internal networks
Symptom: Handshake updates, but you can’t ping any internal IP; transfer counters barely move.
Root cause: AllowedIPs doesn’t include the internal subnets, or routes weren’t installed, or Windows route metric prefers physical interface.
Fix: Add the correct internal CIDRs to AllowedIPs on the client. Confirm routes. On Windows, check the route table and metrics; ensure the tunnel routes are more specific than the default route when split tunneling.
3) IP connectivity works, but hostnames fail
Symptom: You can ping 10.20.5.10 but ssh internal-api fails to resolve.
Root cause: DNS not set, search domain missing, systemd-resolved/NetworkManager not applying DNS to the tunnel, or DNS is set but blocked by policy.
Fix: Set DNS = ... in the client config and ensure your platform applies it. On macOS, ensure the tunnel’s DNS resolver is actually being used. Add search domains where supported.
4) Everything works except “big” transfers (Git, Docker pulls, file copies)
Symptom: Small pings succeed, some websites load, but large downloads stall or hang.
Root cause: MTU/PMTU blackhole; fragmentation blocked somewhere between client and server.
Fix: Lower the tunnel MTU (try 1380, then 1360, then 1280). Retest with DF pings. Prefer a consistent MTU across client fleet when possible.
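With wg-quick, the persistent version of this fix is a single line in the [Interface] stanza; a sketch (keep the rest of the config as-is, and pick the largest value that survives your DF-ping tests):

[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
MTU = 1380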
5) Split tunnel accidentally became a full tunnel
Symptom: Printing breaks, Zoom gets weird, local devices disappear, public IP changes to your office egress.
Root cause: AllowedIPs includes 0.0.0.0/0 (and possibly ::/0), or an OS-level default route was added.
Fix: Remove default-route AllowedIPs. If you need selective internet routing, do it intentionally with policy routing—don’t pretend a default route is “just temporary.”
6) Two peers, one destination: traffic goes to the wrong place
Symptom: Some internal subnets intermittently route to the “wrong” peer; behavior changes after reconnect.
Root cause: Overlapping AllowedIPs between peers on the same interface.
Fix: Make AllowedIPs disjoint. If you need failover, implement it with a higher-level mechanism; don’t rely on ambiguity.
7) Works on Wi‑Fi, fails on LTE/hotspot
Symptom: At home it’s fine; on a phone hotspot it handshakes once then dies.
Root cause: NAT mapping expires quickly; roaming changes endpoint; keepalive absent.
Fix: Set PersistentKeepalive. Confirm the server allows roaming (most do by default). Consider moving the server to a network with stable UDP handling.
8) Windows says connected, but internal traffic goes out the wrong NIC
Symptom: Internal routes exist but aren’t used; route lookup shows the Wi‑Fi interface.
Root cause: Competing routes and metrics; sometimes a broader route with better metric wins on Windows.
Fix: Ensure your tunnel routes are correctly specific and metrics behave. Avoid pushing broad routes that overlap existing corp routes unless you mean it.
10) Three corporate-world mini-stories (things you’ll recognize)
Story A: the incident caused by a wrong assumption
A mid-sized company rolled out WireGuard to replace an aging VPN. The pilot was smooth: engineers on Linux laptops, clean home networks, a single internal subnet. The config used split tunneling and an internal DNS server. It felt like success.
Then the wider rollout hit Windows users. Support tickets came in: “VPN connected, internal hostnames broken.” The network team assumed DNS was “part of the VPN,” because that’s how the old client behaved. The WireGuard config files didn’t include a DNS directive, because nobody noticed during the pilot—Linux users already had split DNS set locally via other tooling.
Operations tried to “fix” it by adding hosts-file entries for a few critical services. That bought a day and created an archaeology site: stale IPs, inconsistent behavior, and a growing pile of workarounds for what was, underneath, a resolver problem.
The real fix was boring: define DNS behavior explicitly in the client config (and add search domains where needed), then verify per-OS that the resolver selection actually changes when the tunnel comes up. The lesson wasn’t “WireGuard is tricky.” The lesson was “your expectations are inherited from a different product.”
Story B: the optimization that backfired
A security team wanted to reduce egress complexity. Their idea: full-tunnel all laptop traffic through a central WireGuard gateway, then apply DLP and logging at one choke point. It’s a common corporate instinct: if the network is messy, funnel it through a pipe and call it governance.
The rollout “worked,” in the sense that handshakes happened and traffic flowed. What failed was everything around it: latency-sensitive apps, local network access, and anything relying on geographically close CDNs. Video calls started pinning to distant egress, and internal complaints turned into executives doing that special thing where they ask for a “status update” every hour.
The team then “optimized” by increasing MTU to squeeze performance. On some paths it helped; on others it created PMTU blackholes. Now they had a double failure mode: some users had general slowness, others had stalls and half-loading pages. The tunnel wasn’t down; it was worse—it was inconsistently broken.
The recovery plan was to stop pretending one tunnel policy fits everyone. They moved to split tunneling for most users, kept full-tunnel only for high-risk roles, standardized a conservative MTU, and documented what “full tunnel” actually breaks. It wasn’t as ideologically neat. It was operationally sane.
Story C: the boring but correct practice that saved the day
A financial services team ran WireGuard for vendor access to a restricted environment. The setup was unglamorous: a small set of peers, static AllowedIPs, strict change control, and a monthly “prove it still works” drill where someone validated handshake, routes, DNS, and one or two application flows.
One Monday, a vendor reported total failure: no internal access. The client showed “connected,” but application traffic died. The team didn’t start with guesswork. They followed their own checklist: handshake status, counters, route lookup, DNS resolution, then MTU sanity checks. It turned out to be none of those.
The actual issue was upstream: a firewall change blocked inbound UDP to the WireGuard port from a specific partner ASN. Because they had a routine drill, they also had baseline outputs for “healthy.” The absence of any handshake made the problem obvious and kept the argument short with the firewall team.
The fix was a targeted policy adjustment. No heroics. The value wasn’t the tooling; it was having a repeatable verification habit and a shared definition of “working.”
11) Checklists / step-by-step plan
Client setup checklist (all OSes)
- Get a clean config from the server side (or generate it): client private key, client tunnel IP, server public key, endpoint, and AllowedIPs.
- Decide tunnel policy: split tunnel (recommended for most) or full tunnel (only when you need it and accept the blast radius).
- Set DNS explicitly: internal resolver and search domains if your environment uses short hostnames.
- Add keepalive for roaming clients: PersistentKeepalive = 25.
- Bring the tunnel up.
- Verify handshake and counters.
- Verify routing for one internal IP.
- Verify DNS resolution for one internal name.
- Test one real application path (HTTPS to an internal endpoint, SSH, or whatever matters).
- Document the intended behavior (what subnets are reachable, whether internet is tunneled, what DNS should be used).
Windows step-by-step (client-only)
- Import the config into the WireGuard app.
- Confirm AllowedIPs is what you intend (split vs full).
- Ensure DNS is set if you need internal resolution.
- Activate the tunnel.
- If “connected” but useless: check route table and DNS behavior before touching keys.
macOS step-by-step (client-only)
- Import the config into the WireGuard app.
- Enable the tunnel and confirm it survives sleep/wake.
- If internal names fail: check whether the tunnel’s DNS resolver is actually being used and whether you need search domains.
- If it breaks only on some networks: add keepalive and consider MTU reduction.
Linux step-by-step with wg-quick
- Place config at /etc/wireguard/wg0.conf, permissions locked down.
- Bring up the tunnel.
- Verify handshake and routes.
- Verify DNS integration (systemd-resolved vs resolv.conf reality).
cr0x@server:~$ sudo install -m 600 wg-client.conf /etc/wireguard/wg0.conf
cr0x@server:~$ sudo wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.8.0.2/32 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 10.8.0.0/24 dev wg0
[#] ip -4 route add 10.20.0.0/16 dev wg0
[#] resolvconf -a wg0 -m 0 -x
Meaning: wg-quick created the interface, applied config, added address and routes, and attempted DNS hooks. (Hook method varies by distro.)
Decision: If you don’t see routes being added, your config’s AllowedIPs may be empty/incorrect, or you’re using a different manager. Fix that before escalating.
Joke #2: If your VPN problem disappears when you lower MTU, congratulations—you’ve discovered that the internet still runs on vibes and duct tape.
12) FAQ
Q1: What does “AllowedIPs” actually do?
On the client, it’s the route selection list: destinations that should be sent into the tunnel. On the receiver, it’s also peer selection: which peer “owns” a given source/destination range. Treat it as both routing intent and an access control map.
Q2: Should I use 0.0.0.0/0 on clients?
Only if you want a full tunnel and you’re ready for the side effects (local network access, geolocation/CDN shifts, latency, corporate choke points). Split tunnel is operationally calmer for most engineering organizations.
Q3: Why do I see handshakes but no traffic?
Because a handshake only proves the two peers can exchange authenticated messages. It does not prove your OS is routing traffic into the tunnel, nor that the server routes traffic onward. Check routes, AllowedIPs, and server-side forwarding/firewall.
Q4: Do I need PersistentKeepalive?
For roaming clients behind NAT (most laptops off-prem), yes. If you’re on stable networks and always initiating traffic, maybe not. In practice, 25 seconds is cheap insurance.
Q5: Why does DNS keep leaking outside the tunnel?
Because DNS is its own subsystem and your OS may prefer the physical interface’s resolver unless you set DNS explicitly and your client manager applies it correctly. Fix DNS selection, not WireGuard crypto.
Q6: What’s the right MTU for WireGuard?
There isn’t one universal value. 1420 is common. If you see stalls, try 1380, then 1360, then 1280. The “right” MTU is the largest that doesn’t blackhole packets on your worst network path.
Q7: Can I have multiple peers in one client config?
Yes, but be disciplined: no overlapping AllowedIPs. If you need failover, design it explicitly; don’t rely on ambiguous overlaps and hope WireGuard “chooses the best.”
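A disjoint two-peer layout looks like this; a sketch with placeholder keys, endpoints, and subnets (the point is that the AllowedIPs ranges do not overlap):

[Peer]
PublicKey = <server-a-public-key>
Endpoint = vpn-a.example.net:51820
AllowedIPs = 10.20.0.0/16
[Peer]
PublicKey = <server-b-public-key>
Endpoint = vpn-b.example.net:51820
AllowedIPs = 10.30.0.0/16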
Q8: Why does it work on Wi‑Fi but fail on hotel networks?
Hotels love captive portals, UDP throttling, and weird NAT timeouts. Complete portal auth first, then use keepalive. If UDP is blocked outright, you’ll need a different network or a policy-approved alternative endpoint.
Q9: Is WireGuard “more secure” than older VPNs?
It’s secure when configured correctly and benefits from a smaller, more auditable design. But security is a system property: key management, endpoint hardening, and routing policy matter just as much as the protocol.
Q10: What’s the simplest way to avoid self-inflicted outages?
Keep configs minimal, standardize templates, avoid overlapping AllowedIPs, set DNS intentionally, and validate with a repeatable checklist. Complexity doesn’t scale; operational habits do.
13) Practical next steps
- Pick your tunnel policy: split tunnel for most users; full tunnel only for roles that truly need centralized egress control.
- Standardize a client config template: include DNS and keepalive defaults; keep AllowedIPs explicit and documented.
- Create a “known-good test”: one internal IP ping, one TCP service check, one internal DNS name lookup. Use the same three tests across Windows/macOS/Linux.
- Adopt the fast diagnosis playbook: handshake → routing → IP reachability → DNS → MTU → server-side forwarding.
- Write down what “working” means: reachable subnets, expected DNS behavior, and whether internet should be tunneled. This prevents the next incident caused by assumptions.
If you do all of that, WireGuard becomes what it promised: a small, sharp tool that stays out of your way—until you need it, at which point it behaves predictably. Predictable is the best compliment production systems ever get.